Aug 13 00:41:49.159067 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:41:49.159117 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:41:49.159136 kernel: BIOS-provided physical RAM map:
Aug 13 00:41:49.159151 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Aug 13 00:41:49.159164 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Aug 13 00:41:49.159178 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Aug 13 00:41:49.159199 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Aug 13 00:41:49.159213 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Aug 13 00:41:49.159228 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable
Aug 13 00:41:49.159275 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data
Aug 13 00:41:49.159662 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable
Aug 13 00:41:49.159680 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Aug 13 00:41:49.159694 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Aug 13 00:41:49.159709 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Aug 13 00:41:49.159733 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Aug 13 00:41:49.159749 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Aug 13 00:41:49.159764 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Aug 13 00:41:49.159780 kernel: NX (Execute Disable) protection: active
Aug 13 00:41:49.159796 kernel: APIC: Static calls initialized
Aug 13 00:41:49.159811 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:41:49.159827 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018
Aug 13 00:41:49.159842 kernel: random: crng init done
Aug 13 00:41:49.159860 kernel: secureboot: Secure boot disabled
Aug 13 00:41:49.159876 kernel: SMBIOS 2.4 present.
Aug 13 00:41:49.159892 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Aug 13 00:41:49.159908 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:41:49.159923 kernel: Hypervisor detected: KVM
Aug 13 00:41:49.159938 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:41:49.159954 kernel: kvm-clock: using sched offset of 15208458971 cycles
Aug 13 00:41:49.159971 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:41:49.159987 kernel: tsc: Detected 2299.998 MHz processor
Aug 13 00:41:49.160003 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:41:49.160033 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:41:49.160049 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Aug 13 00:41:49.160065 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Aug 13 00:41:49.160081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:41:49.160097 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Aug 13 00:41:49.160112 kernel: Using GB pages for direct mapping
Aug 13 00:41:49.160128 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:41:49.160144 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Aug 13 00:41:49.160170 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Aug 13 00:41:49.160187 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Aug 13 00:41:49.160204 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Aug 13 00:41:49.160220 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Aug 13 00:41:49.160237 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Aug 13 00:41:49.160282 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Aug 13 00:41:49.160303 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Aug 13 00:41:49.160320 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Aug 13 00:41:49.160337 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Aug 13 00:41:49.160353 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Aug 13 00:41:49.160370 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Aug 13 00:41:49.160386 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Aug 13 00:41:49.160403 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Aug 13 00:41:49.160419 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Aug 13 00:41:49.160435 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Aug 13 00:41:49.160455 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Aug 13 00:41:49.160472 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Aug 13 00:41:49.160489 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Aug 13 00:41:49.160505 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Aug 13 00:41:49.160520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 00:41:49.160535 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Aug 13 00:41:49.160551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Aug 13 00:41:49.160566 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Aug 13 00:41:49.160583 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Aug 13 00:41:49.160605 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff]
Aug 13 00:41:49.160621 kernel: Zone ranges:
Aug 13 00:41:49.160638 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:41:49.160654 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:41:49.160670 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 00:41:49.160687 kernel: Device empty
Aug 13 00:41:49.160702 kernel: Movable zone start for each node
Aug 13 00:41:49.160718 kernel: Early memory node ranges
Aug 13 00:41:49.160733 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Aug 13 00:41:49.160748 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Aug 13 00:41:49.160769 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff]
Aug 13 00:41:49.160786 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff]
Aug 13 00:41:49.160801 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Aug 13 00:41:49.160818 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Aug 13 00:41:49.160834 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Aug 13 00:41:49.160851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:41:49.160868 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Aug 13 00:41:49.160884 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Aug 13 00:41:49.160901 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Aug 13 00:41:49.160922 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Aug 13 00:41:49.160938 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Aug 13 00:41:49.160955 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 00:41:49.160973 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:41:49.160989 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:41:49.161005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:41:49.161029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:41:49.161046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:41:49.161063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:41:49.161084 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:41:49.161101 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:41:49.161117 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:41:49.161134 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:41:49.161150 kernel: CPU topo: Max. threads per core: 2
Aug 13 00:41:49.161168 kernel: CPU topo: Num. cores per package: 1
Aug 13 00:41:49.161184 kernel: CPU topo: Num. threads per package: 2
Aug 13 00:41:49.161201 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 00:41:49.161218 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 00:41:49.161238 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:41:49.161493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:41:49.161512 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:41:49.161558 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 00:41:49.161575 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 00:41:49.161591 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:41:49.161607 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:41:49.161623 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:41:49.161643 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:41:49.161667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:41:49.161683 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 00:41:49.161700 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:41:49.161716 kernel: Fallback order for Node 0: 0
Aug 13 00:41:49.161732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Aug 13 00:41:49.161748 kernel: Policy zone: Normal
Aug 13 00:41:49.161764 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:41:49.161781 kernel: software IO TLB: area num 2.
Aug 13 00:41:49.161813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:41:49.161831 kernel: Kernel/User page tables isolation: enabled
Aug 13 00:41:49.161849 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:41:49.161869 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:41:49.161886 kernel: Dynamic Preempt: voluntary
Aug 13 00:41:49.161904 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:41:49.161921 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:41:49.161939 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:41:49.161957 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:41:49.161988 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:41:49.162004 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:41:49.162030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:41:49.162049 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:41:49.162068 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:41:49.162087 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:41:49.162105 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:41:49.162129 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:41:49.162147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:41:49.162166 kernel: Console: colour dummy device 80x25
Aug 13 00:41:49.162184 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:41:49.162203 kernel: ACPI: Core revision 20240827
Aug 13 00:41:49.162221 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:41:49.162240 kernel: x2apic enabled
Aug 13 00:41:49.162275 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:41:49.162291 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Aug 13 00:41:49.162307 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 00:41:49.162330 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Aug 13 00:41:49.162348 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Aug 13 00:41:49.162366 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Aug 13 00:41:49.162385 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:41:49.162402 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Aug 13 00:41:49.162420 kernel: Spectre V2 : Mitigation: IBRS
Aug 13 00:41:49.162439 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:41:49.162455 kernel: RETBleed: Mitigation: IBRS
Aug 13 00:41:49.162477 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:41:49.162494 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Aug 13 00:41:49.162511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:41:49.162528 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 00:41:49.162545 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:41:49.162566 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:41:49.162586 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:41:49.162603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:41:49.162619 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:41:49.162641 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:41:49.162657 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:41:49.162675 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:41:49.162693 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:41:49.162712 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:41:49.162729 kernel: landlock: Up and running.
Aug 13 00:41:49.162747 kernel: SELinux: Initializing.
Aug 13 00:41:49.162767 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:41:49.162785 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:41:49.162808 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Aug 13 00:41:49.162827 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Aug 13 00:41:49.162847 kernel: signal: max sigframe size: 1776
Aug 13 00:41:49.162865 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:41:49.162882 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:41:49.162899 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:41:49.162917 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:41:49.162936 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:41:49.162954 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:41:49.162976 kernel: .... node #0, CPUs: #1
Aug 13 00:41:49.162996 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 00:41:49.163040 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 00:41:49.163057 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:41:49.163073 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 13 00:41:49.163090 kernel: Memory: 7564260K/7860552K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 290712K reserved, 0K cma-reserved)
Aug 13 00:41:49.163106 kernel: devtmpfs: initialized
Aug 13 00:41:49.163123 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:41:49.163145 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Aug 13 00:41:49.163164 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:41:49.163183 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:41:49.163202 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:41:49.163221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:41:49.163240 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:41:49.164735 kernel: audit: type=2000 audit(1755045704.822:1): state=initialized audit_enabled=0 res=1
Aug 13 00:41:49.164757 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:41:49.164774 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:41:49.164798 kernel: cpuidle: using governor menu
Aug 13 00:41:49.164816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:41:49.164835 kernel: dca service started, version 1.12.1
Aug 13 00:41:49.164853 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:41:49.164873 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:41:49.164892 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:41:49.164911 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:41:49.164930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:41:49.164948 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:41:49.165008 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:41:49.165039 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:41:49.165059 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:41:49.165078 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 00:41:49.165096 kernel: ACPI: Interpreter enabled
Aug 13 00:41:49.165115 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:41:49.165134 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:41:49.165153 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:41:49.165172 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 00:41:49.165195 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 13 00:41:49.165213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:41:49.165510 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:41:49.165712 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 00:41:49.165902 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 00:41:49.165925 kernel: PCI host bridge to bus 0000:00
Aug 13 00:41:49.166123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:41:49.166348 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:41:49.166516 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:41:49.166690 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Aug 13 00:41:49.166854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:41:49.167077 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:41:49.167305 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:41:49.169395 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Aug 13 00:41:49.169606 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 00:41:49.169806 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Aug 13 00:41:49.170004 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 00:41:49.170234 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Aug 13 00:41:49.170479 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:41:49.170675 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 00:41:49.170865 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Aug 13 00:41:49.171072 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 13 00:41:49.172309 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Aug 13 00:41:49.172532 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Aug 13 00:41:49.172560 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:41:49.172579 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:41:49.172598 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:41:49.172623 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:41:49.172641 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 00:41:49.172660 kernel: iommu: Default domain type: Translated
Aug 13 00:41:49.172679 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:41:49.172697 kernel: efivars: Registered efivars operations
Aug 13 00:41:49.172715 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:41:49.172734 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:41:49.172752 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Aug 13 00:41:49.172770 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Aug 13 00:41:49.172792 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff]
Aug 13 00:41:49.172810 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Aug 13 00:41:49.172828 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Aug 13 00:41:49.172845 kernel: vgaarb: loaded
Aug 13 00:41:49.172886 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:41:49.172905 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:41:49.172924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:41:49.172942 kernel: pnp: PnP ACPI init
Aug 13 00:41:49.172961 kernel: pnp: PnP ACPI: found 7 devices
Aug 13 00:41:49.172984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:41:49.173003 kernel: NET: Registered PF_INET protocol family
Aug 13 00:41:49.173032 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:41:49.173051 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 00:41:49.173069 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:41:49.173088 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:41:49.173107 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 00:41:49.173126 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 00:41:49.173145 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:41:49.173167 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:41:49.173185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:41:49.173203 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:41:49.176461 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:41:49.176659 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:41:49.176833 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:41:49.177145 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Aug 13 00:41:49.177465 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 00:41:49.177566 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:41:49.177592 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:41:49.177612 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Aug 13 00:41:49.177632 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:41:49.177707 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Aug 13 00:41:49.177740 kernel: clocksource: Switched to clocksource tsc
Aug 13 00:41:49.177760 kernel: Initialise system trusted keyrings
Aug 13 00:41:49.177779 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 00:41:49.177804 kernel: Key type asymmetric registered
Aug 13 00:41:49.177823 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:41:49.177841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:41:49.177860 kernel: io scheduler mq-deadline registered
Aug 13 00:41:49.177878 kernel: io scheduler kyber registered
Aug 13 00:41:49.177896 kernel: io scheduler bfq registered
Aug 13 00:41:49.177929 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:41:49.177949 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 00:41:49.179291 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Aug 13 00:41:49.179345 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 13 00:41:49.179578 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Aug 13 00:41:49.179605 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 00:41:49.179797 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Aug 13 00:41:49.179820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:41:49.179840 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:41:49.179859 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 00:41:49.179878 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Aug 13 00:41:49.179897 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Aug 13 00:41:49.180109 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Aug 13 00:41:49.180135 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:41:49.180154 kernel: i8042: Warning: Keylock active
Aug 13 00:41:49.180173 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:41:49.180193 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:41:49.181444 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 00:41:49.181631 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 00:41:49.181829 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T00:41:48 UTC (1755045708)
Aug 13 00:41:49.182007 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 00:41:49.182031 kernel: intel_pstate: CPU model not supported
Aug 13 00:41:49.182051 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:41:49.182070 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 00:41:49.182088 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:41:49.182107 kernel: Segment Routing with IPv6
Aug 13 00:41:49.182126 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:41:49.182144 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:41:49.182168 kernel: Key type dns_resolver registered
Aug 13 00:41:49.182187 kernel: IPI shorthand broadcast: enabled
Aug 13 00:41:49.182205 kernel: sched_clock: Marking stable (4113004804, 176972084)->(4584673105, -294696217)
Aug 13 00:41:49.182223 kernel: registered taskstats version 1
Aug 13 00:41:49.182242 kernel: Loading compiled-in X.509 certificates
Aug 13 00:41:49.184133 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:41:49.184154 kernel: Demotion targets for Node 0: null
Aug 13 00:41:49.184174 kernel: Key type .fscrypt registered
Aug 13 00:41:49.184193 kernel: Key type fscrypt-provisioning registered
Aug 13 00:41:49.184218 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:41:49.184236 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Aug 13 00:41:49.184269 kernel: ima: No architecture policies found
Aug 13 00:41:49.184288 kernel: clk: Disabling unused clocks
Aug 13 00:41:49.184307 kernel: Warning: unable to open an initial console.
Aug 13 00:41:49.184327 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:41:49.184352 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:41:49.184370 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:41:49.184394 kernel: Run /init as init process
Aug 13 00:41:49.184413 kernel: with arguments:
Aug 13 00:41:49.184431 kernel: /init
Aug 13 00:41:49.184449 kernel: with environment:
Aug 13 00:41:49.184467 kernel: HOME=/
Aug 13 00:41:49.184486 kernel: TERM=linux
Aug 13 00:41:49.184504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:41:49.184525 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:41:49.184554 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:41:49.184575 systemd[1]: Detected virtualization google.
Aug 13 00:41:49.184593 systemd[1]: Detected architecture x86-64.
Aug 13 00:41:49.184612 systemd[1]: Running in initrd.
Aug 13 00:41:49.184632 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:41:49.184652 systemd[1]: Hostname set to .
Aug 13 00:41:49.184671 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:41:49.184691 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:41:49.184715 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:41:49.184753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:41:49.184777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:41:49.184797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:41:49.184818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:41:49.184843 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:41:49.184864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:41:49.184883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:41:49.184903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:41:49.184919 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:41:49.184942 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:41:49.184973 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:41:49.184998 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:41:49.185026 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:41:49.185045 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:41:49.185064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:41:49.185083 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:41:49.185103 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:41:49.185122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:41:49.185142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:41:49.185161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:41:49.185185 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:41:49.185204 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:41:49.185224 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:41:49.185260 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:41:49.185282 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 00:41:49.185301 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:41:49.185321 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:41:49.185347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:41:49.185366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:41:49.185391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:41:49.185452 systemd-journald[207]: Collecting audit messages is disabled.
Aug 13 00:41:49.185498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:41:49.185518 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:41:49.185538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:41:49.185558 systemd-journald[207]: Journal started
Aug 13 00:41:49.185601 systemd-journald[207]: Runtime Journal (/run/log/journal/dce87f270b9d4a52b4c1c03f65236242) is 8M, max 148.9M, 140.9M free.
Aug 13 00:41:49.187275 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:41:49.194613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:41:49.199018 systemd-modules-load[208]: Inserted module 'overlay'
Aug 13 00:41:49.223232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:41:49.228306 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:41:49.235181 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 00:41:49.240626 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:41:49.250409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:41:49.250452 kernel: Bridge firewalling registered Aug 13 00:41:49.250401 systemd-modules-load[208]: Inserted module 'br_netfilter' Aug 13 00:41:49.256705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:41:49.263973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:41:49.265820 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:41:49.271481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:41:49.279702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:41:49.293965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:41:49.297783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:41:49.305648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:41:49.310642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:41:49.342111 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:41:49.375393 systemd-resolved[246]: Positive Trust Anchors:
Aug 13 00:41:49.375417 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:41:49.375501 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:41:49.383727 systemd-resolved[246]: Defaulting to hostname 'linux'. Aug 13 00:41:49.388712 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:41:49.397492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:41:49.462299 kernel: SCSI subsystem initialized Aug 13 00:41:49.475285 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:41:49.487287 kernel: iscsi: registered transport (tcp) Aug 13 00:41:49.512285 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:41:49.512363 kernel: QLogic iSCSI HBA Driver Aug 13 00:41:49.535649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:41:49.554760 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:41:49.561141 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:41:49.619696 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:41:49.624399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:41:49.687346 kernel: raid6: avx2x4 gen() 17853 MB/s Aug 13 00:41:49.704309 kernel: raid6: avx2x2 gen() 18120 MB/s Aug 13 00:41:49.721633 kernel: raid6: avx2x1 gen() 14320 MB/s Aug 13 00:41:49.721682 kernel: raid6: using algorithm avx2x2 gen() 18120 MB/s Aug 13 00:41:49.739621 kernel: raid6: .... xor() 18810 MB/s, rmw enabled Aug 13 00:41:49.739687 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:41:49.763302 kernel: xor: automatically using best checksumming function avx Aug 13 00:41:49.947304 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:41:49.955861 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:41:49.959198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:41:49.998029 systemd-udevd[455]: Using default interface naming scheme 'v255'. Aug 13 00:41:50.007795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:41:50.013952 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:41:50.048657 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Aug 13 00:41:50.084487 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:41:50.086785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:41:50.189064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:41:50.195732 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Aug 13 00:41:50.321961 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Aug 13 00:41:50.330273 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:41:50.343289 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:41:50.361338 kernel: AES CTR mode by8 optimization enabled Aug 13 00:41:50.420138 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 13 00:41:50.442282 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:41:50.447936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:41:50.448158 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:50.452731 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:50.463722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:41:50.473970 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 13 00:41:50.474331 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 13 00:41:50.474564 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 13 00:41:50.475378 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 13 00:41:50.476038 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:41:50.477992 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:41:50.495397 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:41:50.495493 kernel: GPT:17805311 != 25165823 Aug 13 00:41:50.495520 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:41:50.495544 kernel: GPT:17805311 != 25165823 Aug 13 00:41:50.495568 kernel: GPT: Use GNU Parted to correct GPT errors. 
Aug 13 00:41:50.495592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:50.496700 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 13 00:41:50.507320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:41:50.591785 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 13 00:41:50.596671 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:41:50.611390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 13 00:41:50.632673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 13 00:41:50.643498 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Aug 13 00:41:50.643785 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 13 00:41:50.651490 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:41:50.656392 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:41:50.660371 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:41:50.665559 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:41:50.678456 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:41:50.688849 disk-uuid[609]: Primary Header is updated. Aug 13 00:41:50.688849 disk-uuid[609]: Secondary Entries is updated. Aug 13 00:41:50.688849 disk-uuid[609]: Secondary Header is updated. Aug 13 00:41:50.706960 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Aug 13 00:41:50.714402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:50.728277 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:51.740339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:41:51.741886 disk-uuid[610]: The operation has completed successfully. Aug 13 00:41:51.841386 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:41:51.841541 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:41:51.886538 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:41:51.914718 sh[631]: Success Aug 13 00:41:51.936624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:41:51.936706 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:41:51.936734 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:41:51.950275 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Aug 13 00:41:52.043685 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:41:52.048380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:41:52.069679 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:41:52.089291 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:41:52.089368 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (643) Aug 13 00:41:52.094337 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:41:52.094411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:41:52.094437 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:41:52.118808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Aug 13 00:41:52.120149 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:41:52.122747 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:41:52.125013 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:41:52.128746 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:41:52.169277 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (676) Aug 13 00:41:52.173501 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:41:52.173568 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:41:52.173601 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:41:52.185593 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:41:52.186288 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:41:52.193502 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:41:52.292180 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:41:52.304367 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:41:52.402119 systemd-networkd[812]: lo: Link UP Aug 13 00:41:52.403313 systemd-networkd[812]: lo: Gained carrier Aug 13 00:41:52.407968 systemd-networkd[812]: Enumeration completed Aug 13 00:41:52.408131 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:41:52.411548 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:52.411556 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:41:52.415194 systemd-networkd[812]: eth0: Link UP Aug 13 00:41:52.415491 systemd-networkd[812]: eth0: Gained carrier Aug 13 00:41:52.415510 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:41:52.423676 systemd[1]: Reached target network.target - Network. Aug 13 00:41:52.434602 systemd-networkd[812]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 00:41:52.456633 ignition[735]: Ignition 2.21.0 Aug 13 00:41:52.456651 ignition[735]: Stage: fetch-offline Aug 13 00:41:52.456707 ignition[735]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:52.459699 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:41:52.456722 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:52.467548 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:41:52.456866 ignition[735]: parsed url from cmdline: "" Aug 13 00:41:52.456872 ignition[735]: no config URL provided Aug 13 00:41:52.456882 ignition[735]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:41:52.456896 ignition[735]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:41:52.456907 ignition[735]: failed to fetch config: resource requires networking Aug 13 00:41:52.457486 ignition[735]: Ignition finished successfully Aug 13 00:41:52.506943 ignition[822]: Ignition 2.21.0 Aug 13 00:41:52.506960 ignition[822]: Stage: fetch Aug 13 00:41:52.507181 ignition[822]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:52.507198 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:52.507361 ignition[822]: parsed url from cmdline: "" Aug 13 00:41:52.507368 ignition[822]: no config URL provided Aug 13 00:41:52.507376 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:41:52.520182 unknown[822]: fetched base config from "system"
Aug 13 00:41:52.507389 ignition[822]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:41:52.520196 unknown[822]: fetched base config from "system" Aug 13 00:41:52.507449 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 13 00:41:52.520206 unknown[822]: fetched user config from "gcp" Aug 13 00:41:52.512117 ignition[822]: GET result: OK Aug 13 00:41:52.523686 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:41:52.512231 ignition[822]: parsing config with SHA512: 8b284ccdb906f9b875b979a6bf8150dd396957116b3d9c307a494e8858bc7f4f32451fe27f2fe7fdf11ee4ae9938c4f963fc5a428176117117577ff58b374e55 Aug 13 00:41:52.526764 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:41:52.520722 ignition[822]: fetch: fetch complete Aug 13 00:41:52.520729 ignition[822]: fetch: fetch passed Aug 13 00:41:52.520792 ignition[822]: Ignition finished successfully Aug 13 00:41:52.566993 ignition[829]: Ignition 2.21.0 Aug 13 00:41:52.567014 ignition[829]: Stage: kargs Aug 13 00:41:52.568436 ignition[829]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:52.572525 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:41:52.568455 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:52.578014 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:41:52.570279 ignition[829]: kargs: kargs passed Aug 13 00:41:52.570363 ignition[829]: Ignition finished successfully Aug 13 00:41:52.614471 ignition[836]: Ignition 2.21.0 Aug 13 00:41:52.614488 ignition[836]: Stage: disks Aug 13 00:41:52.614703 ignition[836]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:52.619799 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:41:52.614720 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:52.624093 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:41:52.617699 ignition[836]: disks: disks passed Aug 13 00:41:52.629415 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:41:52.617804 ignition[836]: Ignition finished successfully Aug 13 00:41:52.633434 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:41:52.637387 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:41:52.641412 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:41:52.647115 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:41:52.687370 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Aug 13 00:41:52.701613 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:41:52.708589 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:41:52.904281 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:41:52.905446 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:41:52.909013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:41:52.914195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:41:52.932323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:41:52.936955 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:41:52.937046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Aug 13 00:41:52.949327 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (853) Aug 13 00:41:52.937091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:41:52.953436 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:41:52.953483 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:41:52.953506 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:41:52.960461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:41:52.960830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:41:52.969916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:41:53.109144 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:41:53.118271 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:41:53.125037 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:41:53.131546 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:41:53.299366 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:41:53.302477 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:41:53.317354 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:41:53.329281 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 13 00:41:53.331151 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:41:53.374269 ignition[965]: INFO : Ignition 2.21.0 Aug 13 00:41:53.374269 ignition[965]: INFO : Stage: mount Aug 13 00:41:53.378532 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:53.378532 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:53.378532 ignition[965]: INFO : mount: mount passed Aug 13 00:41:53.378532 ignition[965]: INFO : Ignition finished successfully Aug 13 00:41:53.380757 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:41:53.381830 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:41:53.388507 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:41:53.410067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:41:53.437447 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (977) Aug 13 00:41:53.437514 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:41:53.439447 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:41:53.439488 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:41:53.447465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:41:53.487038 ignition[994]: INFO : Ignition 2.21.0 Aug 13 00:41:53.487038 ignition[994]: INFO : Stage: files Aug 13 00:41:53.494411 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:41:53.494411 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 13 00:41:53.494411 ignition[994]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:41:53.494411 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:41:53.494411 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:41:53.508366 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:41:53.508366 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:41:53.508366 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:41:53.508366 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:41:53.508366 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 00:41:53.496913 unknown[994]: wrote ssh authorized keys file for user: core Aug 13 00:41:53.608212 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:41:53.735935 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:41:53.740437 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:41:53.772415 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 00:41:54.140122 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 00:41:54.186448 systemd-networkd[812]: eth0: Gained IPv6LL Aug 13 00:41:56.528529 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:41:56.528529 ignition[994]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:41:56.536417 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:41:56.536417 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:41:56.536417 ignition[994]: INFO : files: files passed Aug 13 00:41:56.536417 ignition[994]: INFO : Ignition finished successfully Aug 13 00:41:56.536735 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:41:56.544448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:41:56.560624 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:41:56.569883 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:41:56.570042 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:41:56.593439 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:56.593439 initrd-setup-root-after-ignition[1024]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:56.602514 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:41:56.599588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:41:56.605680 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:41:56.612689 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:41:56.683860 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:41:56.684029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:41:56.689019 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:41:56.691565 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:41:56.695758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:41:56.698004 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:41:56.727535 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:41:56.729849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:41:56.759167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:41:56.763544 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Aug 13 00:41:56.763877 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:41:56.771639 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:41:56.771897 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:41:56.779774 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:41:56.782983 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:41:56.787071 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:41:56.791072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:41:56.795091 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:41:56.798964 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:41:56.802973 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:41:56.807104 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:41:56.811108 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:41:56.818616 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:41:56.824710 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:41:56.826279 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:41:56.826978 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:41:56.833866 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:41:56.836981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:41:56.840844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:41:56.841262 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:41:56.845894 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:41:56.846518 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:41:56.854274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:41:56.854919 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:41:56.856900 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:41:56.857111 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:41:56.865136 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:41:56.877619 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:41:56.884614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:41:56.884911 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:41:56.892615 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:41:56.892858 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:41:56.908813 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:41:56.909991 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:41:56.910159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:41:56.920624 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:41:56.924230 ignition[1048]: INFO : Ignition 2.21.0
Aug 13 00:41:56.924230 ignition[1048]: INFO : Stage: umount
Aug 13 00:41:56.924230 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:41:56.924230 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Aug 13 00:41:56.924230 ignition[1048]: INFO : umount: umount passed
Aug 13 00:41:56.924230 ignition[1048]: INFO : Ignition finished successfully
Aug 13 00:41:56.920891 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:41:56.930463 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:41:56.930670 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:41:56.936137 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:41:56.936211 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:41:56.939638 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:41:56.939829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:41:56.942761 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:41:56.942847 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:41:56.946768 systemd[1]: Stopped target network.target - Network.
Aug 13 00:41:56.950605 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:41:56.950801 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:41:56.956478 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:41:56.961400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:41:56.965356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:41:56.968385 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:41:56.972437 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:41:56.976505 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:41:56.976597 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:41:56.979089 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:41:56.979164 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:41:56.981626 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:41:56.981726 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:41:56.985729 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:41:56.985826 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:41:56.992438 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:41:56.992543 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:41:56.995899 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:41:57.002520 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:41:57.003375 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:41:57.003548 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:41:57.012634 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:41:57.012963 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:41:57.013081 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:41:57.017835 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:41:57.018945 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:41:57.024558 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:41:57.024618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:41:57.032702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:41:57.040337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:41:57.040443 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:41:57.044445 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:41:57.044528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:41:57.046120 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:41:57.046196 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:41:57.058418 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:41:57.058509 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:41:57.064674 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:41:57.073875 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:41:57.073963 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:41:57.076855 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:41:57.077125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:41:57.090343 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:41:57.090465 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:41:57.099564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:41:57.099623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:41:57.105532 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:41:57.105609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:41:57.116501 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:41:57.116588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:41:57.125343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:41:57.125450 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:41:57.138785 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:41:57.146348 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:41:57.146570 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:41:57.155673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:41:57.155778 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:41:57.171636 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:41:57.171740 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:41:57.178638 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:41:57.178735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:41:57.185408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:41:57.185494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:41:57.193331 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 00:41:57.193401 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 00:41:57.193446 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:41:57.193491 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:41:57.194018 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:41:57.194129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:41:57.281415 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:41:57.198754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:41:57.198876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:41:57.203323 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:41:57.211970 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:41:57.248575 systemd[1]: Switching root.
Aug 13 00:41:57.294380 systemd-journald[207]: Journal stopped
Aug 13 00:41:59.403800 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:41:59.403895 kernel: SELinux: policy capability open_perms=1
Aug 13 00:41:59.403918 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:41:59.403938 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:41:59.403957 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:41:59.403976 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:41:59.404001 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:41:59.404020 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:41:59.404038 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:41:59.404067 kernel: audit: type=1403 audit(1755045717.901:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:41:59.404101 systemd[1]: Successfully loaded SELinux policy in 56.535ms.
Aug 13 00:41:59.404126 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.889ms.
Aug 13 00:41:59.404150 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:41:59.404179 systemd[1]: Detected virtualization google.
Aug 13 00:41:59.404201 systemd[1]: Detected architecture x86-64.
Aug 13 00:41:59.404221 systemd[1]: Detected first boot.
Aug 13 00:41:59.404268 systemd[1]: Initializing machine ID from random generator.
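The "systemd 256.8 running in system mode" banner encodes compile-time features as +NAME (enabled) and -NAME (disabled) tokens; for example, this build has SELinux on and AppArmor off. A small sketch of splitting such a feature string into enabled/disabled sets, using the string from the log above:

```python
def parse_features(feature_str: str):
    """Split a systemd feature banner into (enabled, disabled) name sets."""
    enabled, disabled = set(), set()
    for tok in feature_str.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# Feature string copied from the boot log (parentheses dropped).
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "+LIBARCHIVE")

enabled, disabled = parse_features(features)
```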
Aug 13 00:41:59.404290 zram_generator::config[1093]: No configuration found.
Aug 13 00:41:59.404318 kernel: Guest personality initialized and is inactive
Aug 13 00:41:59.404346 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:41:59.404368 kernel: Initialized host personality
Aug 13 00:41:59.404389 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:41:59.404412 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:41:59.404435 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:41:59.404457 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:41:59.404518 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:41:59.404543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:41:59.404567 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:41:59.404589 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:41:59.404614 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:41:59.404637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:41:59.404660 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:41:59.404688 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:41:59.404712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:41:59.404734 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:41:59.404757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:41:59.404781 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:41:59.404803 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:41:59.404828 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:41:59.404852 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:41:59.404883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:41:59.404912 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:41:59.404937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:41:59.404960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:41:59.404984 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:41:59.405008 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:41:59.405032 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:41:59.405055 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:41:59.405084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:41:59.405108 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:41:59.405132 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:41:59.405156 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:41:59.405178 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:41:59.405202 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:41:59.405226 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:41:59.405288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:41:59.405312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:41:59.405333 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:41:59.405354 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:41:59.405375 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:41:59.405398 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:41:59.405423 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:41:59.405446 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:41:59.405478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:41:59.405502 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:41:59.405525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:41:59.405549 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:41:59.405571 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:41:59.405593 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:41:59.405619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:41:59.405641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:41:59.405663 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:41:59.405684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:41:59.405704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:41:59.405724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:41:59.405745 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:41:59.405766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:41:59.405788 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:41:59.405813 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:41:59.405834 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:41:59.405855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:41:59.405876 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:41:59.405899 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:41:59.405920 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:41:59.405941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:41:59.405963 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:41:59.405988 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:41:59.406010 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:41:59.406033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:41:59.406054 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:41:59.406084 systemd[1]: Stopped verity-setup.service.
Aug 13 00:41:59.406106 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:41:59.406128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:41:59.406150 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:41:59.406175 kernel: fuse: init (API version 7.41)
Aug 13 00:41:59.406196 kernel: loop: module loaded
Aug 13 00:41:59.406216 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:41:59.406238 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:41:59.408566 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:41:59.408597 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:41:59.408620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:41:59.408642 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:41:59.408665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:41:59.408694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:41:59.408716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:41:59.408738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:41:59.408760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:41:59.408782 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:41:59.408803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:41:59.408825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:41:59.408848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:41:59.408873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:41:59.408895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:41:59.408917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:41:59.408940 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:41:59.408962 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:41:59.408985 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:41:59.409008 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:41:59.409043 kernel: ACPI: bus type drm_connector registered
Aug 13 00:41:59.409065 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:41:59.409145 systemd-journald[1165]: Collecting audit messages is disabled.
Aug 13 00:41:59.409200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:41:59.409228 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:41:59.410312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:41:59.410354 systemd-journald[1165]: Journal started
Aug 13 00:41:59.410405 systemd-journald[1165]: Runtime Journal (/run/log/journal/a098467fe3904e1f9dbe17edeba02e02) is 8M, max 148.9M, 140.9M free.
Aug 13 00:41:58.808431 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:41:58.818062 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:41:58.818673 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:41:59.417211 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:41:59.421342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:41:59.432121 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:41:59.432201 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:41:59.440386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:41:59.452275 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:41:59.462350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:41:59.471326 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:41:59.488814 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:41:59.490647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:41:59.495639 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:41:59.503946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:41:59.510539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:41:59.525972 kernel: loop0: detected capacity change from 0 to 52072
Aug 13 00:41:59.552637 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:41:59.571206 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:41:59.576099 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:41:59.584647 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:41:59.590450 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:41:59.624279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:41:59.625486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:41:59.646951 systemd-journald[1165]: Time spent on flushing to /var/log/journal/a098467fe3904e1f9dbe17edeba02e02 is 38.596ms for 961 entries.
Aug 13 00:41:59.646951 systemd-journald[1165]: System Journal (/var/log/journal/a098467fe3904e1f9dbe17edeba02e02) is 8M, max 584.8M, 576.8M free.
Aug 13 00:41:59.706634 systemd-journald[1165]: Received client request to flush runtime journal.
Aug 13 00:41:59.706722 kernel: loop1: detected capacity change from 0 to 229808
Aug 13 00:41:59.681807 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:41:59.705405 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Aug 13 00:41:59.705434 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Aug 13 00:41:59.709890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:41:59.720185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:41:59.731104 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:41:59.739021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:41:59.786400 kernel: loop2: detected capacity change from 0 to 113872
Aug 13 00:41:59.821056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:41:59.842724 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:41:59.848497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:41:59.857278 kernel: loop3: detected capacity change from 0 to 146240
Aug 13 00:41:59.929543 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Aug 13 00:41:59.929577 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
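journald's flush report ("38.596ms for 961 entries") can be parsed to get a per-entry flush cost, which is a quick sanity check on journal I/O performance. A sketch over the exact message format seen in this log:

```python
import re

# Message text copied from the boot log above.
line = ("systemd-journald[1165]: Time spent on flushing to "
        "/var/log/journal/a098467fe3904e1f9dbe17edeba02e02 is 38.596ms "
        "for 961 entries.")

m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
flush_ms = float(m.group(1))
entries = int(m.group(2))

# Average flush cost per journal entry, in microseconds.
per_entry_us = flush_ms / entries * 1000
```

Here that works out to roughly 40 µs per entry, an unremarkable figure for a first flush to disk.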
Aug 13 00:41:59.944204 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:41:59.983374 kernel: loop4: detected capacity change from 0 to 52072
Aug 13 00:42:00.022562 kernel: loop5: detected capacity change from 0 to 229808
Aug 13 00:42:00.060301 kernel: loop6: detected capacity change from 0 to 113872
Aug 13 00:42:00.100031 kernel: loop7: detected capacity change from 0 to 146240
Aug 13 00:42:00.154618 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Aug 13 00:42:00.156900 (sd-merge)[1241]: Merged extensions into '/usr'.
Aug 13 00:42:00.168850 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:42:00.168917 systemd[1]: Reloading...
Aug 13 00:42:00.342343 zram_generator::config[1263]: No configuration found.
Aug 13 00:42:00.608660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:42:00.666062 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:42:00.824981 systemd[1]: Reloading finished in 652 ms.
Aug 13 00:42:00.844567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:42:00.854320 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:42:00.879541 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:42:00.889172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:42:00.926214 systemd[1]: Reload requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:42:00.926259 systemd[1]: Reloading...
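The (sd-merge) line above lists the four sysext images layered onto /usr on this node: containerd, Docker, Kubernetes, and the GCE OEM payload. A small sketch of pulling those names out of such a log line, relying only on the single-quoting visible in the message:

```python
import re

# (sd-merge) message copied from the boot log.
line = ("(sd-merge)[1241]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes', 'oem-gce'.")

# Extension names are the single-quoted tokens in the message.
extensions = re.findall(r"'([^']+)'", line)
```

On a live system the same information is available from `systemd-sysext status`, which lists merged extension images per hierarchy.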
Aug 13 00:42:00.960848 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 00:42:00.960905 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 00:42:00.962629 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:42:00.963119 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:42:00.969357 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:42:00.969883 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Aug 13 00:42:00.970013 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Aug 13 00:42:00.985458 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:42:00.985479 systemd-tmpfiles[1308]: Skipping /boot
Aug 13 00:42:01.031709 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:42:01.032318 systemd-tmpfiles[1308]: Skipping /boot
Aug 13 00:42:01.096348 zram_generator::config[1335]: No configuration found.
Aug 13 00:42:01.237933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:42:01.353219 systemd[1]: Reloading finished in 426 ms.
Aug 13 00:42:01.379155 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:42:01.408020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:42:01.427649 systemd[1]: Starting audit-rules.service - Load Audit Rules...
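The systemd-tmpfiles warnings above are harmless: a later tmpfiles.d fragment repeats a path already claimed by an earlier one, and the duplicate is ignored. When auditing which fragments shadow which paths, the warnings themselves can be parsed; a sketch over two of the messages from this log:

```python
import re

# Warning texts copied from the boot log above.
warnings = [
    '/usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.',
    '/usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.',
]

# Map each duplicated path to the (config file, line number) that repeats it.
dupes = {}
for w in warnings:
    m = re.match(r'([^:]+):(\d+): Duplicate line for path "([^"]+)"', w)
    if m:
        dupes[m.group(3)] = (m.group(1), int(m.group(2)))
```

On a live machine, `systemd-tmpfiles --cat-config` shows the merged configuration and makes the winning line for each path explicit.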
Aug 13 00:42:01.442039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:42:01.455230 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:42:01.474076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:42:01.489397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:42:01.503550 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:42:01.522624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:01.522971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:42:01.526229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:42:01.542409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:42:01.564707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:42:01.573524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:42:01.573942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:42:01.580075 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:42:01.589443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:01.593171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 00:42:01.594561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:42:01.604351 systemd-udevd[1389]: Using default interface naming scheme 'v255'. Aug 13 00:42:01.606751 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:42:01.609442 augenrules[1406]: No rules Aug 13 00:42:01.618523 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:42:01.619362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:42:01.629892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:42:01.630172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:42:01.640161 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:42:01.640489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:42:01.673862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:01.674713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:42:01.679731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:42:01.697381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:42:01.711522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:42:01.720535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:42:01.720927 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:42:01.725916 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Aug 13 00:42:01.735413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:01.748639 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:42:01.759281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:42:01.773332 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:42:01.786810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:42:01.787104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:42:01.798578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:42:01.799802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:42:01.814031 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:42:01.825350 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:42:01.827317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:42:01.842312 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:42:01.916323 systemd[1]: Finished ensure-sysext.service. Aug 13 00:42:01.925392 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Aug 13 00:42:01.925952 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Aug 13 00:42:01.935502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:01.939739 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:42:01.947656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 00:42:01.953913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:42:01.964503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:42:01.976594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:42:01.990086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:42:02.002310 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:42:02.010937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:42:02.011020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:42:02.016577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:42:02.025488 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:42:02.025755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:42:02.025799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:42:02.027541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:42:02.033874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:42:02.058136 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 00:42:02.076918 augenrules[1463]: /sbin/augenrules: No change Aug 13 00:42:02.088005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:42:02.091173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:42:02.102149 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:42:02.104635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:42:02.109090 augenrules[1492]: No rules Aug 13 00:42:02.113948 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:42:02.115026 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:42:02.137547 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:42:02.137879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:42:02.147860 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:42:02.150364 systemd-resolved[1386]: Positive Trust Anchors: Aug 13 00:42:02.150765 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:42:02.150845 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:42:02.175676 systemd-resolved[1386]: Defaulting to hostname 'linux'. Aug 13 00:42:02.186156 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Aug 13 00:42:02.196436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 13 00:42:02.217537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:42:02.227118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:42:02.238430 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:42:02.247619 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:42:02.258474 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:42:02.269450 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:42:02.281665 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:42:02.290606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:42:02.301406 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:42:02.316638 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:42:02.319398 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:42:02.319469 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:42:02.327422 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:42:02.338050 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:42:02.349552 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:42:02.361412 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:42:02.372687 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:42:02.383416 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Aug 13 00:42:02.392985 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:42:02.404875 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Aug 13 00:42:02.415694 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:42:02.427612 systemd-networkd[1477]: lo: Link UP Aug 13 00:42:02.428052 systemd-networkd[1477]: lo: Gained carrier Aug 13 00:42:02.432182 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:42:02.434481 systemd-networkd[1477]: Enumeration completed Aug 13 00:42:02.436527 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:42:02.436542 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:42:02.437460 systemd-networkd[1477]: eth0: Link UP Aug 13 00:42:02.437564 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:42:02.438576 systemd-networkd[1477]: eth0: Gained carrier Aug 13 00:42:02.440988 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:42:02.451330 systemd-networkd[1477]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 13 00:42:02.472274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 00:42:02.484808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 13 00:42:02.485285 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 00:42:02.499916 systemd[1]: Reached target network.target - Network. Aug 13 00:42:02.512327 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:42:02.528406 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Aug 13 00:42:02.536451 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:42:02.545450 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:42:02.553550 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:42:02.554394 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:42:02.558515 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:42:02.570750 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:42:02.587506 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:42:02.599562 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:42:02.612390 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:42:02.633662 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:42:02.651789 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:42:02.655421 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:42:02.657565 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Aug 13 00:42:02.657624 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 00:42:02.660295 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:42:02.672153 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:42:02.687197 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:42:02.692573 jq[1538]: false Aug 13 00:42:02.698635 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:42:02.719105 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Aug 13 00:42:02.732567 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:42:02.750054 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:42:02.776054 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:42:02.806698 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:42:02.829165 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:42:02.840733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Aug 13 00:42:02.844668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:42:02.864999 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:42:02.877529 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:42:02.905396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:42:02.916064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:42:02.917701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:42:02.920539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:42:02.920865 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 00:42:02.936922 extend-filesystems[1539]: Found /dev/sda6 Aug 13 00:42:02.930754 oslogin_cache_refresh[1540]: Refreshing passwd entry cache Aug 13 00:42:02.939486 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache Aug 13 00:42:02.975959 oslogin_cache_refresh[1540]: Failure getting users, quitting Aug 13 00:42:02.977227 jq[1561]: true Aug 13 00:42:02.977524 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting Aug 13 00:42:02.977524 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:42:02.977524 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache Aug 13 00:42:02.975989 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:42:02.976060 oslogin_cache_refresh[1540]: Refreshing group entry cache Aug 13 00:42:02.981218 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting Aug 13 00:42:02.981218 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:42:02.979182 oslogin_cache_refresh[1540]: Failure getting groups, quitting Aug 13 00:42:02.979201 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:42:02.992000 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:42:02.993663 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:42:03.004673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Aug 13 00:42:03.005597 coreos-metadata[1535]: Aug 13 00:42:03.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Aug 13 00:42:03.009929 extend-filesystems[1539]: Found /dev/sda9 Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.011 INFO Fetch successful Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.012 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.016 INFO Fetch successful Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.019 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.022 INFO Fetch successful Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.022 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Aug 13 00:42:03.027434 coreos-metadata[1535]: Aug 13 00:42:03.023 INFO Fetch successful Aug 13 00:42:03.042695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:42:03.044537 extend-filesystems[1539]: Checking size of /dev/sda9 Aug 13 00:42:03.043854 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:42:03.115288 update_engine[1560]: I20250813 00:42:03.106212 1560 main.cc:92] Flatcar Update Engine starting Aug 13 00:42:03.112187 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:42:03.123567 extend-filesystems[1539]: Resized partition /dev/sda9 Aug 13 00:42:03.116230 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 00:42:03.143341 extend-filesystems[1595]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:42:03.190356 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Aug 13 00:42:03.190417 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Aug 13 00:42:03.190505 jq[1579]: true Aug 13 00:42:03.195522 extend-filesystems[1595]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:42:03.195522 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 2 Aug 13 00:42:03.195522 extend-filesystems[1595]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Aug 13 00:42:03.253618 extend-filesystems[1539]: Resized filesystem in /dev/sda9 Aug 13 00:42:03.262448 tar[1563]: linux-amd64/LICENSE Aug 13 00:42:03.262448 tar[1563]: linux-amd64/helm Aug 13 00:42:03.197503 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:42:03.228008 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:42:03.230341 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:42:03.242369 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:42:03.303461 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Aug 13 00:42:03.435019 ntpd[1543]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:57:09 UTC 2025 (1): Starting Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:57:09 UTC 2025 (1): Starting Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: ---------------------------------------------------- Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: corporation. Support and training for ntp-4 are Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: available at https://www.nwtime.org/support Aug 13 00:42:03.440773 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: ---------------------------------------------------- Aug 13 00:42:03.435060 ntpd[1543]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:42:03.435076 ntpd[1543]: ---------------------------------------------------- Aug 13 00:42:03.435091 ntpd[1543]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:42:03.435113 ntpd[1543]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:42:03.435127 ntpd[1543]: corporation. Support and training for ntp-4 are Aug 13 00:42:03.435141 ntpd[1543]: available at https://www.nwtime.org/support Aug 13 00:42:03.435155 ntpd[1543]: ---------------------------------------------------- Aug 13 00:42:03.458138 ntpd[1543]: proto: precision = 0.113 usec (-23) Aug 13 00:42:03.462867 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: proto: precision = 0.113 usec (-23) Aug 13 00:42:03.462867 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: basedate set to 2025-07-31 Aug 13 00:42:03.462867 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: gps base set to 2025-08-03 (week 2378) Aug 13 00:42:03.461629 ntpd[1543]: basedate set to 2025-07-31 Aug 13 00:42:03.461654 ntpd[1543]: gps base set to 2025-08-03 (week 2378) 
Aug 13 00:42:03.484596 ntpd[1543]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listen normally on 3 eth0 10.128.0.26:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listen normally on 4 lo [::1]:123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Aug 13 00:42:03.491944 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: Listening on routing socket on fd #21 for interface updates Aug 13 00:42:03.484668 ntpd[1543]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:42:03.484888 ntpd[1543]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:42:03.484937 ntpd[1543]: Listen normally on 3 eth0 10.128.0.26:123 Aug 13 00:42:03.484992 ntpd[1543]: Listen normally on 4 lo [::1]:123 Aug 13 00:42:03.485049 ntpd[1543]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:42:03.485077 ntpd[1543]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123 Aug 13 00:42:03.485096 ntpd[1543]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Aug 13 00:42:03.485135 ntpd[1543]: Listening on routing socket on fd #21 for interface updates Aug 13 00:42:03.513854 ntpd[1543]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:42:03.517449 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:42:03.517449 ntpd[1543]: 13 Aug 00:42:03 ntpd[1543]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:42:03.513903 ntpd[1543]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:42:03.535536 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:42:03.541556 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Aug 13 00:42:03.542171 dbus-daemon[1536]: [system] SELinux support is enabled Aug 13 00:42:03.543606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:42:03.556805 systemd[1]: Starting sshkeys.service... Aug 13 00:42:03.556951 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:42:03.556990 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:42:03.557085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:42:03.557107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:42:03.582146 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1477 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:42:03.610451 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:42:03.610875 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:42:03.615276 update_engine[1560]: I20250813 00:42:03.613914 1560 update_check_scheduler.cc:74] Next update check in 2m43s Aug 13 00:42:03.654463 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:42:03.690496 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:42:03.691850 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:42:03.705402 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Aug 13 00:42:03.733783 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:42:03.834024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:42:03.888275 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:42:03.896320 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:42:03.903790 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:42:03.915535 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:42:03.919549 coreos-metadata[1635]: Aug 13 00:42:03.919 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 13 00:42:03.928879 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:42:03.929797 coreos-metadata[1635]: Aug 13 00:42:03.929 INFO Fetch failed with 404: resource not found Aug 13 00:42:03.929797 coreos-metadata[1635]: Aug 13 00:42:03.929 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 13 00:42:03.930177 coreos-metadata[1635]: Aug 13 00:42:03.930 INFO Fetch successful Aug 13 00:42:03.930177 coreos-metadata[1635]: Aug 13 00:42:03.930 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 13 00:42:03.931436 coreos-metadata[1635]: Aug 13 00:42:03.931 INFO Fetch failed with 404: resource not found Aug 13 00:42:03.931436 coreos-metadata[1635]: Aug 13 00:42:03.931 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 13 00:42:03.932773 coreos-metadata[1635]: Aug 13 00:42:03.932 INFO Fetch failed with 404: resource not found Aug 13 00:42:03.932773 coreos-metadata[1635]: Aug 13 00:42:03.932 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 
Aug 13 00:42:03.934028 systemd-logind[1553]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:42:03.934944 coreos-metadata[1635]: Aug 13 00:42:03.934 INFO Fetch successful Aug 13 00:42:03.934062 systemd-logind[1553]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 13 00:42:03.934099 systemd-logind[1553]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:42:03.940941 systemd-logind[1553]: New seat seat0. Aug 13 00:42:03.942143 unknown[1635]: wrote ssh authorized keys file for user: core Aug 13 00:42:03.943587 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:42:03.961397 systemd[1]: Started sshd@0-10.128.0.26:22-139.178.68.195:48040.service - OpenSSH per-connection server daemon (139.178.68.195:48040). Aug 13 00:42:03.974080 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:42:03.976015 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:42:04.049292 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:42:04.049635 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:42:04.063474 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:42:04.077966 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:42:04.077791 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:42:04.098914 systemd[1]: Finished sshkeys.service. Aug 13 00:42:04.154682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:42:04.167725 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:42:04.180799 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:42:04.191726 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 13 00:42:04.248517 containerd[1580]: time="2025-08-13T00:42:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:42:04.251293 containerd[1580]: time="2025-08-13T00:42:04.251218092Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295514944Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.175µs" Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295566453Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295596427Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295816187Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295840493Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295897184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.295986146Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:42:04.296289 containerd[1580]: time="2025-08-13T00:42:04.296002449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 
00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.296838882Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.296885895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.296913087Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.296926380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.297072454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.297414696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.297463409Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:42:04.297529 containerd[1580]: time="2025-08-13T00:42:04.297480715Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:42:04.301352 containerd[1580]: time="2025-08-13T00:42:04.300316714Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:42:04.301352 
containerd[1580]: time="2025-08-13T00:42:04.300905236Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:42:04.301352 containerd[1580]: time="2025-08-13T00:42:04.301015119Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.308855222Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309035385Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309073436Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309095924Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309116288Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309134685Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309153392Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309173195Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309192429Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: 
time="2025-08-13T00:42:04.309209027Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309227321Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309284212Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309476621Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:42:04.310276 containerd[1580]: time="2025-08-13T00:42:04.309508966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309533941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309553232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309571461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309589172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309615197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309635420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309655757Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309673706Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309692253Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309776592Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309799275Z" level=info msg="Start snapshots syncer" Aug 13 00:42:04.310923 containerd[1580]: time="2025-08-13T00:42:04.309836698Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:42:04.311425 containerd[1580]: time="2025-08-13T00:42:04.310185625Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.313525728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.313782695Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314031396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314084880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314104718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314122779Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314166008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314185657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314204568Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:42:04.314322 containerd[1580]: time="2025-08-13T00:42:04.314277280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.314298429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.314865490Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.314941837Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.315020910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.315038948Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:42:04.315097 containerd[1580]: time="2025-08-13T00:42:04.315059112Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.315073938Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316376041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316427510Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316457411Z" level=info msg="runtime interface created" Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316467275Z" level=info msg="created NRI interface" Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316481668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:42:04.316772 containerd[1580]: time="2025-08-13T00:42:04.316523287Z" level=info msg="Connect containerd service" Aug 13 00:42:04.317636 containerd[1580]: time="2025-08-13T00:42:04.316569753Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:42:04.320547 
containerd[1580]: time="2025-08-13T00:42:04.320234092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:42:04.327807 polkitd[1651]: Started polkitd version 126 Aug 13 00:42:04.338191 polkitd[1651]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:42:04.339230 polkitd[1651]: Loading rules from directory /run/polkit-1/rules.d Aug 13 00:42:04.339453 polkitd[1651]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:42:04.340203 polkitd[1651]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 00:42:04.340384 polkitd[1651]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:42:04.340654 polkitd[1651]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:42:04.344682 polkitd[1651]: Finished loading, compiling and executing 2 rules Aug 13 00:42:04.345871 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:42:04.346594 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:42:04.347440 polkitd[1651]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:42:04.364483 systemd-networkd[1477]: eth0: Gained IPv6LL Aug 13 00:42:04.375109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:42:04.386705 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:42:04.402279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:04.414918 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Aug 13 00:42:04.429462 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Aug 13 00:42:04.453992 systemd-hostnamed[1630]: Hostname set to (transient) Aug 13 00:42:04.455049 systemd-resolved[1386]: System hostname changed to 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal'. Aug 13 00:42:04.522431 init.sh[1683]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 13 00:42:04.523084 init.sh[1683]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 13 00:42:04.525671 init.sh[1683]: + /usr/bin/google_instance_setup Aug 13 00:42:04.555943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:42:04.567804 sshd[1654]: Accepted publickey for core from 139.178.68.195 port 48040 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:04.569945 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:04.590930 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:42:04.604280 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:42:04.647327 systemd-logind[1553]: New session 1 of user core. Aug 13 00:42:04.681345 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:42:04.700140 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:42:04.771952 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:42:04.784924 systemd-logind[1553]: New session c1 of user core. 
Aug 13 00:42:04.790523 containerd[1580]: time="2025-08-13T00:42:04.790472173Z" level=info msg="Start subscribing containerd event" Aug 13 00:42:04.790651 containerd[1580]: time="2025-08-13T00:42:04.790551929Z" level=info msg="Start recovering state" Aug 13 00:42:04.790701 containerd[1580]: time="2025-08-13T00:42:04.790680713Z" level=info msg="Start event monitor" Aug 13 00:42:04.790744 containerd[1580]: time="2025-08-13T00:42:04.790702830Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:42:04.790744 containerd[1580]: time="2025-08-13T00:42:04.790716677Z" level=info msg="Start streaming server" Aug 13 00:42:04.790847 containerd[1580]: time="2025-08-13T00:42:04.790741798Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:42:04.790847 containerd[1580]: time="2025-08-13T00:42:04.790753574Z" level=info msg="runtime interface starting up..." Aug 13 00:42:04.790847 containerd[1580]: time="2025-08-13T00:42:04.790764566Z" level=info msg="starting plugins..." Aug 13 00:42:04.790847 containerd[1580]: time="2025-08-13T00:42:04.790795644Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:42:04.797812 containerd[1580]: time="2025-08-13T00:42:04.797553451Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:42:04.800111 containerd[1580]: time="2025-08-13T00:42:04.800043262Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:42:04.808405 containerd[1580]: time="2025-08-13T00:42:04.808355506Z" level=info msg="containerd successfully booted in 0.560938s" Aug 13 00:42:04.808689 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:42:05.171362 tar[1563]: linux-amd64/README.md Aug 13 00:42:05.170989 systemd[1702]: Queued start job for default target default.target. Aug 13 00:42:05.176919 systemd[1702]: Created slice app.slice - User Application Slice. Aug 13 00:42:05.176969 systemd[1702]: Reached target paths.target - Paths. 
Aug 13 00:42:05.177045 systemd[1702]: Reached target timers.target - Timers. Aug 13 00:42:05.182323 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:42:05.206866 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:42:05.215720 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:42:05.219400 systemd[1702]: Reached target sockets.target - Sockets. Aug 13 00:42:05.219487 systemd[1702]: Reached target basic.target - Basic System. Aug 13 00:42:05.219556 systemd[1702]: Reached target default.target - Main User Target. Aug 13 00:42:05.219609 systemd[1702]: Startup finished in 407ms. Aug 13 00:42:05.219932 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:42:05.236509 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:42:05.483532 systemd[1]: Started sshd@1-10.128.0.26:22-139.178.68.195:48054.service - OpenSSH per-connection server daemon (139.178.68.195:48054). Aug 13 00:42:05.519288 instance-setup[1690]: INFO Running google_set_multiqueue. Aug 13 00:42:05.548096 instance-setup[1690]: INFO Set channels for eth0 to 2. Aug 13 00:42:05.553480 instance-setup[1690]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 13 00:42:05.555570 instance-setup[1690]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 13 00:42:05.556194 instance-setup[1690]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Aug 13 00:42:05.557913 instance-setup[1690]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 13 00:42:05.559612 instance-setup[1690]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 13 00:42:05.561827 instance-setup[1690]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 13 00:42:05.561900 instance-setup[1690]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Aug 13 00:42:05.563755 instance-setup[1690]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 13 00:42:05.572664 instance-setup[1690]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 13 00:42:05.577435 instance-setup[1690]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 13 00:42:05.579341 instance-setup[1690]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 13 00:42:05.579879 instance-setup[1690]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 13 00:42:05.607035 init.sh[1683]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 13 00:42:05.785963 startup-script[1750]: INFO Starting startup scripts. Aug 13 00:42:05.792450 startup-script[1750]: INFO No startup scripts found in metadata. Aug 13 00:42:05.792531 startup-script[1750]: INFO Finished running startup scripts. Aug 13 00:42:05.815997 init.sh[1683]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 13 00:42:05.815997 init.sh[1683]: + daemon_pids=() Aug 13 00:42:05.816176 init.sh[1683]: + for d in accounts clock_skew network Aug 13 00:42:05.816884 init.sh[1683]: + daemon_pids+=($!) Aug 13 00:42:05.816884 init.sh[1683]: + for d in accounts clock_skew network Aug 13 00:42:05.816884 init.sh[1683]: + daemon_pids+=($!) Aug 13 00:42:05.817059 init.sh[1683]: + for d in accounts clock_skew network Aug 13 00:42:05.817231 init.sh[1683]: + daemon_pids+=($!) Aug 13 00:42:05.817515 init.sh[1754]: + /usr/bin/google_clock_skew_daemon Aug 13 00:42:05.817906 init.sh[1755]: + /usr/bin/google_network_daemon Aug 13 00:42:05.819173 init.sh[1683]: + NOTIFY_SOCKET=/run/systemd/notify Aug 13 00:42:05.819173 init.sh[1683]: + /usr/bin/systemd-notify --ready Aug 13 00:42:05.819293 init.sh[1753]: + /usr/bin/google_accounts_daemon Aug 13 00:42:05.838401 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
Aug 13 00:42:05.839757 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:05.842044 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 48054 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:05.859286 init.sh[1683]: + wait -n 1753 1754 1755 Aug 13 00:42:05.864775 systemd-logind[1553]: New session 2 of user core. Aug 13 00:42:05.867709 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:42:06.072640 sshd[1757]: Connection closed by 139.178.68.195 port 48054 Aug 13 00:42:06.074585 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:06.087649 systemd[1]: sshd@1-10.128.0.26:22-139.178.68.195:48054.service: Deactivated successfully. Aug 13 00:42:06.089330 systemd-logind[1553]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:42:06.092175 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:42:06.100955 systemd-logind[1553]: Removed session 2. Aug 13 00:42:06.136589 systemd[1]: Started sshd@2-10.128.0.26:22-139.178.68.195:48070.service - OpenSSH per-connection server daemon (139.178.68.195:48070). Aug 13 00:42:06.258910 google-clock-skew[1754]: INFO Starting Google Clock Skew daemon. Aug 13 00:42:06.271527 google-clock-skew[1754]: INFO Clock drift token has changed: 0. Aug 13 00:42:06.323773 google-networking[1755]: INFO Starting Google Networking daemon. 
Aug 13 00:42:06.357337 groupadd[1772]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 13 00:42:06.360992 groupadd[1772]: group added to /etc/gshadow: name=google-sudoers Aug 13 00:42:06.413610 groupadd[1772]: new group: name=google-sudoers, GID=1000 Aug 13 00:42:06.446686 ntpd[1543]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1a%2]:123 Aug 13 00:42:06.447169 ntpd[1543]: 13 Aug 00:42:06 ntpd[1543]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1a%2]:123 Aug 13 00:42:06.447847 google-accounts[1753]: INFO Starting Google Accounts daemon. Aug 13 00:42:06.460449 google-accounts[1753]: WARNING OS Login not installed. Aug 13 00:42:06.462649 google-accounts[1753]: INFO Creating a new user account for 0. Aug 13 00:42:06.468109 init.sh[1781]: useradd: invalid user name '0': use --badname to ignore Aug 13 00:42:06.468498 google-accounts[1753]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 13 00:42:06.497148 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 48070 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:06.498019 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:06.505517 systemd-logind[1553]: New session 3 of user core. Aug 13 00:42:07.000402 systemd-resolved[1386]: Clock change detected. Flushing caches. Aug 13 00:42:07.001146 google-clock-skew[1754]: INFO Synced system time with hardware clock. Aug 13 00:42:07.001963 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:42:07.203004 sshd[1783]: Connection closed by 139.178.68.195 port 48070 Aug 13 00:42:07.203046 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:07.212217 systemd[1]: sshd@2-10.128.0.26:22-139.178.68.195:48070.service: Deactivated successfully. Aug 13 00:42:07.216408 systemd[1]: session-3.scope: Deactivated successfully. 
Aug 13 00:42:07.223165 systemd-logind[1553]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:42:07.228159 systemd-logind[1553]: Removed session 3. Aug 13 00:42:07.246306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:07.257668 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:42:07.267623 systemd[1]: Startup finished in 4.326s (kernel) + 9.073s (initrd) + 8.929s (userspace) = 22.329s. Aug 13 00:42:07.269456 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:42:08.212669 kubelet[1793]: E0813 00:42:08.212595 1793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:42:08.215604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:42:08.215886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:42:08.216565 systemd[1]: kubelet.service: Consumed 1.331s CPU time, 267.6M memory peak. Aug 13 00:42:17.260866 systemd[1]: Started sshd@3-10.128.0.26:22-139.178.68.195:42774.service - OpenSSH per-connection server daemon (139.178.68.195:42774). Aug 13 00:42:17.564358 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 42774 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:17.566240 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:17.573767 systemd-logind[1553]: New session 4 of user core. Aug 13 00:42:17.580939 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 13 00:42:17.777199 sshd[1807]: Connection closed by 139.178.68.195 port 42774 Aug 13 00:42:17.778131 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:17.784225 systemd[1]: sshd@3-10.128.0.26:22-139.178.68.195:42774.service: Deactivated successfully. Aug 13 00:42:17.786596 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:42:17.787813 systemd-logind[1553]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:42:17.789897 systemd-logind[1553]: Removed session 4. Aug 13 00:42:17.831235 systemd[1]: Started sshd@4-10.128.0.26:22-139.178.68.195:42786.service - OpenSSH per-connection server daemon (139.178.68.195:42786). Aug 13 00:42:18.328635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:42:18.333971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:18.390260 sshd[1813]: Accepted publickey for core from 139.178.68.195 port 42786 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:18.392020 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:18.400022 systemd-logind[1553]: New session 5 of user core. Aug 13 00:42:18.403932 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:42:18.599409 sshd[1818]: Connection closed by 139.178.68.195 port 42786 Aug 13 00:42:18.600242 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:18.606630 systemd[1]: sshd@4-10.128.0.26:22-139.178.68.195:42786.service: Deactivated successfully. Aug 13 00:42:18.608991 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:42:18.610365 systemd-logind[1553]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:42:18.612736 systemd-logind[1553]: Removed session 5. 
Aug 13 00:42:18.656261 systemd[1]: Started sshd@5-10.128.0.26:22-139.178.68.195:42802.service - OpenSSH per-connection server daemon (139.178.68.195:42802). Aug 13 00:42:18.702120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:18.719259 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:42:18.771018 kubelet[1831]: E0813 00:42:18.770953 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:42:18.776721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:42:18.776967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:42:18.778096 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.1M memory peak. Aug 13 00:42:18.964492 sshd[1824]: Accepted publickey for core from 139.178.68.195 port 42802 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:18.966607 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:18.974475 systemd-logind[1553]: New session 6 of user core. Aug 13 00:42:18.983957 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:42:19.175452 sshd[1838]: Connection closed by 139.178.68.195 port 42802 Aug 13 00:42:19.176334 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:19.181854 systemd[1]: sshd@5-10.128.0.26:22-139.178.68.195:42802.service: Deactivated successfully. Aug 13 00:42:19.184204 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:42:19.185444 systemd-logind[1553]: Session 6 logged out. Waiting for processes to exit. 
Aug 13 00:42:19.187579 systemd-logind[1553]: Removed session 6. Aug 13 00:42:19.231458 systemd[1]: Started sshd@6-10.128.0.26:22-139.178.68.195:42810.service - OpenSSH per-connection server daemon (139.178.68.195:42810). Aug 13 00:42:19.539680 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 42810 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:19.541395 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:19.549045 systemd-logind[1553]: New session 7 of user core. Aug 13 00:42:19.555952 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:42:19.736295 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:42:19.736827 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:42:19.751649 sudo[1847]: pam_unix(sudo:session): session closed for user root Aug 13 00:42:19.795482 sshd[1846]: Connection closed by 139.178.68.195 port 42810 Aug 13 00:42:19.796987 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:19.802366 systemd[1]: sshd@6-10.128.0.26:22-139.178.68.195:42810.service: Deactivated successfully. Aug 13 00:42:19.804984 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:42:19.807009 systemd-logind[1553]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:42:19.809509 systemd-logind[1553]: Removed session 7. Aug 13 00:42:19.859352 systemd[1]: Started sshd@7-10.128.0.26:22-139.178.68.195:42812.service - OpenSSH per-connection server daemon (139.178.68.195:42812). Aug 13 00:42:20.163556 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 42812 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:20.165595 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:20.172993 systemd-logind[1553]: New session 8 of user core. 
Aug 13 00:42:20.179891 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:42:20.341503 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:42:20.342034 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:42:20.349452 sudo[1857]: pam_unix(sudo:session): session closed for user root Aug 13 00:42:20.363711 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:42:20.364282 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:42:20.377301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:42:20.425561 augenrules[1879]: No rules Aug 13 00:42:20.427641 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:42:20.428034 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:42:20.430126 sudo[1856]: pam_unix(sudo:session): session closed for user root Aug 13 00:42:20.472175 sshd[1855]: Connection closed by 139.178.68.195 port 42812 Aug 13 00:42:20.473082 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:20.478037 systemd[1]: sshd@7-10.128.0.26:22-139.178.68.195:42812.service: Deactivated successfully. Aug 13 00:42:20.480486 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:42:20.482477 systemd-logind[1553]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:42:20.484563 systemd-logind[1553]: Removed session 8. Aug 13 00:42:20.527538 systemd[1]: Started sshd@8-10.128.0.26:22-139.178.68.195:39300.service - OpenSSH per-connection server daemon (139.178.68.195:39300). 
Aug 13 00:42:20.838485 sshd[1888]: Accepted publickey for core from 139.178.68.195 port 39300 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg Aug 13 00:42:20.839940 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:20.847329 systemd-logind[1553]: New session 9 of user core. Aug 13 00:42:20.854998 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:42:21.016466 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:42:21.016963 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:42:21.503049 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:42:21.525409 (dockerd)[1908]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:42:21.851800 dockerd[1908]: time="2025-08-13T00:42:21.851471610Z" level=info msg="Starting up" Aug 13 00:42:21.856725 dockerd[1908]: time="2025-08-13T00:42:21.856430284Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:42:21.924726 dockerd[1908]: time="2025-08-13T00:42:21.924651375Z" level=info msg="Loading containers: start." Aug 13 00:42:21.944727 kernel: Initializing XFRM netlink socket Aug 13 00:42:22.276474 systemd-networkd[1477]: docker0: Link UP Aug 13 00:42:22.282674 dockerd[1908]: time="2025-08-13T00:42:22.282598495Z" level=info msg="Loading containers: done." 
Aug 13 00:42:22.302663 dockerd[1908]: time="2025-08-13T00:42:22.302557428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:42:22.302918 dockerd[1908]: time="2025-08-13T00:42:22.302680666Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:42:22.302918 dockerd[1908]: time="2025-08-13T00:42:22.302854593Z" level=info msg="Initializing buildkit" Aug 13 00:42:22.304995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck513398404-merged.mount: Deactivated successfully. Aug 13 00:42:22.337483 dockerd[1908]: time="2025-08-13T00:42:22.337401468Z" level=info msg="Completed buildkit initialization" Aug 13 00:42:22.346621 dockerd[1908]: time="2025-08-13T00:42:22.346537009Z" level=info msg="Daemon has completed initialization" Aug 13 00:42:22.346970 dockerd[1908]: time="2025-08-13T00:42:22.346648881Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:42:22.346956 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:42:23.213404 containerd[1580]: time="2025-08-13T00:42:23.213351538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:42:24.488267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010668607.mount: Deactivated successfully. 
Aug 13 00:42:27.700278 containerd[1580]: time="2025-08-13T00:42:27.700206446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:27.701710 containerd[1580]: time="2025-08-13T00:42:27.701639634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30084865" Aug 13 00:42:27.703209 containerd[1580]: time="2025-08-13T00:42:27.703138683Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:27.706650 containerd[1580]: time="2025-08-13T00:42:27.706569468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:27.708084 containerd[1580]: time="2025-08-13T00:42:27.707861675Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 4.494456326s" Aug 13 00:42:27.708084 containerd[1580]: time="2025-08-13T00:42:27.707913110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 00:42:27.709491 containerd[1580]: time="2025-08-13T00:42:27.709431519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:42:28.860911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:42:28.863730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:29.272954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:29.283766 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:42:29.345849 kubelet[2170]: E0813 00:42:29.345799 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:42:29.349308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:42:29.349554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:42:29.350214 systemd[1]: kubelet.service: Consumed 224ms CPU time, 109.7M memory peak. 
Aug 13 00:42:30.221075 containerd[1580]: time="2025-08-13T00:42:30.221000592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:30.222609 containerd[1580]: time="2025-08-13T00:42:30.222550279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26021295"
Aug 13 00:42:30.224547 containerd[1580]: time="2025-08-13T00:42:30.224492200Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:30.229712 containerd[1580]: time="2025-08-13T00:42:30.229550354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:30.233624 containerd[1580]: time="2025-08-13T00:42:30.233434946Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 2.523956157s"
Aug 13 00:42:30.233624 containerd[1580]: time="2025-08-13T00:42:30.233565059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Aug 13 00:42:30.234343 containerd[1580]: time="2025-08-13T00:42:30.234158728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 00:42:32.690021 containerd[1580]: time="2025-08-13T00:42:32.689891844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:32.692016 containerd[1580]: time="2025-08-13T00:42:32.691940301Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20156929"
Aug 13 00:42:32.693814 containerd[1580]: time="2025-08-13T00:42:32.693725745Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:32.697731 containerd[1580]: time="2025-08-13T00:42:32.697437458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:42:32.699216 containerd[1580]: time="2025-08-13T00:42:32.698772222Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 2.464574958s"
Aug 13 00:42:32.699216 containerd[1580]: time="2025-08-13T00:42:32.698820869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Aug 13 00:42:32.699661 containerd[1580]: time="2025-08-13T00:42:32.699629696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 00:42:34.328605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322274182.mount: Deactivated successfully.
Aug 13 00:42:34.981867 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 00:42:35.064718 containerd[1580]: time="2025-08-13T00:42:35.064584870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:35.066346 containerd[1580]: time="2025-08-13T00:42:35.066265251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31894561" Aug 13 00:42:35.068338 containerd[1580]: time="2025-08-13T00:42:35.068258395Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:35.071888 containerd[1580]: time="2025-08-13T00:42:35.071805025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:35.073177 containerd[1580]: time="2025-08-13T00:42:35.072940171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.37313456s" Aug 13 00:42:35.073177 containerd[1580]: time="2025-08-13T00:42:35.072989435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 00:42:35.074027 containerd[1580]: time="2025-08-13T00:42:35.073989511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:42:35.760137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516598771.mount: Deactivated successfully. 
Aug 13 00:42:37.511136 containerd[1580]: time="2025-08-13T00:42:37.511065974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:37.512612 containerd[1580]: time="2025-08-13T00:42:37.512552542Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Aug 13 00:42:37.514128 containerd[1580]: time="2025-08-13T00:42:37.514059042Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:37.517348 containerd[1580]: time="2025-08-13T00:42:37.517275846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:37.518829 containerd[1580]: time="2025-08-13T00:42:37.518649127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.444616743s" Aug 13 00:42:37.518829 containerd[1580]: time="2025-08-13T00:42:37.518712963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 00:42:37.519980 containerd[1580]: time="2025-08-13T00:42:37.519933009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:42:38.249620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884124133.mount: Deactivated successfully. 
Aug 13 00:42:38.257919 containerd[1580]: time="2025-08-13T00:42:38.257846310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:42:38.259217 containerd[1580]: time="2025-08-13T00:42:38.259156199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Aug 13 00:42:38.260796 containerd[1580]: time="2025-08-13T00:42:38.260744371Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:42:38.263962 containerd[1580]: time="2025-08-13T00:42:38.263905607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:42:38.265518 containerd[1580]: time="2025-08-13T00:42:38.265472457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 745.503304ms"
Aug 13 00:42:38.265518 containerd[1580]: time="2025-08-13T00:42:38.265515682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:42:38.266438 containerd[1580]: time="2025-08-13T00:42:38.266394004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 13 00:42:38.902858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291235472.mount: Deactivated successfully.
Aug 13 00:42:39.361590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:42:39.368081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:42:39.932199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:42:39.946607 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:42:40.069309 kubelet[2302]: E0813 00:42:40.069214 2302 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:42:40.074048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:42:40.074414 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:42:40.075223 systemd[1]: kubelet.service: Consumed 261ms CPU time, 110.6M memory peak.
Aug 13 00:42:41.610493 containerd[1580]: time="2025-08-13T00:42:41.610412325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:41.612102 containerd[1580]: time="2025-08-13T00:42:41.612038865Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58251906" Aug 13 00:42:41.613807 containerd[1580]: time="2025-08-13T00:42:41.613651117Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:41.617784 containerd[1580]: time="2025-08-13T00:42:41.617726373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:42:41.619461 containerd[1580]: time="2025-08-13T00:42:41.619258639Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.352808568s" Aug 13 00:42:41.619461 containerd[1580]: time="2025-08-13T00:42:41.619309285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 00:42:46.586640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:46.587085 systemd[1]: kubelet.service: Consumed 261ms CPU time, 110.6M memory peak. Aug 13 00:42:46.590724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:46.641130 systemd[1]: Reload requested from client PID 2353 ('systemctl') (unit session-9.scope)... 
Aug 13 00:42:46.641155 systemd[1]: Reloading... Aug 13 00:42:46.831798 zram_generator::config[2397]: No configuration found. Aug 13 00:42:46.999539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:42:47.172930 systemd[1]: Reloading finished in 531 ms. Aug 13 00:42:47.242591 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:42:47.242782 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:42:47.243192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:47.243280 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.3M memory peak. Aug 13 00:42:47.246887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:47.587635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:47.602639 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:42:47.659726 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:42:47.659726 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:42:47.659726 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:42:47.659726 kubelet[2449]: I0813 00:42:47.658443 2449 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:42:48.716280 kubelet[2449]: I0813 00:42:48.716225 2449 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 00:42:48.716280 kubelet[2449]: I0813 00:42:48.716272 2449 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:42:48.717020 kubelet[2449]: I0813 00:42:48.716884 2449 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 00:42:48.784423 kubelet[2449]: E0813 00:42:48.784372 2449 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Aug 13 00:42:48.784836 kubelet[2449]: I0813 00:42:48.784568 2449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:42:48.795174 kubelet[2449]: I0813 00:42:48.795119 2449 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 00:42:48.800936 kubelet[2449]: I0813 00:42:48.800872 2449 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:42:48.801360 kubelet[2449]: I0813 00:42:48.801284 2449 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:42:48.801591 kubelet[2449]: I0813 00:42:48.801338 2449 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:42:48.801820 kubelet[2449]: I0813 00:42:48.801594 2449 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:42:48.801820 kubelet[2449]: I0813 00:42:48.801612 2449 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 00:42:48.803389 kubelet[2449]: I0813 00:42:48.803349 2449 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:42:48.806854 kubelet[2449]: I0813 00:42:48.806828 2449 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 00:42:48.806854 kubelet[2449]: I0813 00:42:48.806856 2449 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:42:48.807013 kubelet[2449]: I0813 00:42:48.806895 2449 kubelet.go:386] "Adding apiserver pod source"
Aug 13 00:42:48.807013 kubelet[2449]: I0813 00:42:48.806921 2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:42:48.816294 kubelet[2449]: E0813 00:42:48.815468 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 13 00:42:48.816294 kubelet[2449]: E0813 00:42:48.815608 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 13 00:42:48.816747 kubelet[2449]: I0813 00:42:48.816717 2449 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 00:42:48.817584 kubelet[2449]: I0813 00:42:48.817554 2449 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 00:42:48.818835 kubelet[2449]: W0813 00:42:48.818811 2449 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:42:48.838588 kubelet[2449]: I0813 00:42:48.838553 2449 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:42:48.838795 kubelet[2449]: I0813 00:42:48.838741 2449 server.go:1289] "Started kubelet"
Aug 13 00:42:48.841066 kubelet[2449]: I0813 00:42:48.840995 2449 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:42:48.846374 kubelet[2449]: I0813 00:42:48.846271 2449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:42:48.847007 kubelet[2449]: I0813 00:42:48.846976 2449 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:42:48.847920 kubelet[2449]: I0813 00:42:48.847894 2449 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 00:42:48.853045 kubelet[2449]: I0813 00:42:48.853012 2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:42:48.857644 kubelet[2449]: I0813 00:42:48.857599 2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:42:48.859782 kubelet[2449]: I0813 00:42:48.859720 2449 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:42:48.860515 kubelet[2449]: E0813 00:42:48.860133 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found"
Aug 13 00:42:48.862577 kubelet[2449]: I0813 00:42:48.861111 2449 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:42:48.862577 kubelet[2449]: I0813 00:42:48.861217 2449 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:42:48.862903 kubelet[2449]: E0813 00:42:48.859380 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal.185b2cd4002e3f32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,UID:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 00:42:48.83865989 +0000 UTC m=+1.229154943,LastTimestamp:2025-08-13 00:42:48.83865989 +0000 UTC m=+1.229154943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,}"
Aug 13 00:42:48.863874 kubelet[2449]: E0813 00:42:48.863828 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 13 00:42:48.864158 kubelet[2449]: E0813 00:42:48.864125 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="200ms"
Aug 13 00:42:48.864511 kubelet[2449]: I0813 00:42:48.864488 2449 factory.go:223] Registration of the systemd container factory successfully
Aug 13 00:42:48.864748 kubelet[2449]: I0813 00:42:48.864724 2449 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:42:48.868756 kubelet[2449]: I0813 00:42:48.868343 2449 factory.go:223] Registration of the containerd container factory successfully
Aug 13 00:42:48.891885 kubelet[2449]: I0813 00:42:48.891827 2449 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:42:48.899458 kubelet[2449]: I0813 00:42:48.899418 2449 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:42:48.899616 kubelet[2449]: I0813 00:42:48.899581 2449 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 00:42:48.899616 kubelet[2449]: I0813 00:42:48.899616 2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:42:48.899780 kubelet[2449]: I0813 00:42:48.899627 2449 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:42:48.899929 kubelet[2449]: E0813 00:42:48.899882 2449 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:42:48.901912 kubelet[2449]: E0813 00:42:48.901878 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:42:48.910730 kubelet[2449]: I0813 00:42:48.910303 2449 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:42:48.910730 kubelet[2449]: I0813 00:42:48.910338 2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:42:48.910730 kubelet[2449]: I0813 00:42:48.910363 2449 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:42:48.913573 kubelet[2449]: I0813 00:42:48.913533 2449 policy_none.go:49] "None policy: Start" Aug 13 00:42:48.913573 kubelet[2449]: I0813 00:42:48.913564 2449 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:42:48.913573 kubelet[2449]: I0813 00:42:48.913582 2449 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:42:48.922601 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:42:48.934983 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:42:48.937805 update_engine[1560]: I20250813 00:42:48.937741 1560 update_attempter.cc:509] Updating boot flags... Aug 13 00:42:48.942949 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:42:48.954568 kubelet[2449]: E0813 00:42:48.954342 2449 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:42:48.956386 kubelet[2449]: I0813 00:42:48.955418 2449 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:42:48.956386 kubelet[2449]: I0813 00:42:48.955462 2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:42:48.956386 kubelet[2449]: I0813 00:42:48.956198 2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:42:48.963600 kubelet[2449]: E0813 00:42:48.962760 2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:42:48.963964 kubelet[2449]: E0813 00:42:48.963940 2449 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" Aug 13 00:42:49.034089 systemd[1]: Created slice kubepods-burstable-pod872887847dda88055268f8077f5e62f0.slice - libcontainer container kubepods-burstable-pod872887847dda88055268f8077f5e62f0.slice. 
Aug 13 00:42:49.068284 kubelet[2449]: E0813 00:42:49.068229 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="400ms" Aug 13 00:42:49.073733 kubelet[2449]: I0813 00:42:49.073027 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.076213 kubelet[2449]: E0813 00:42:49.076155 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.104729 kubelet[2449]: E0813 00:42:49.101329 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.165732 kubelet[2449]: I0813 00:42:49.164768 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.165732 kubelet[2449]: I0813 00:42:49.164846 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f77e1682d79b6d1cabda5050efc9db0-kubeconfig\") pod 
\"kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"9f77e1682d79b6d1cabda5050efc9db0\") " pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.165732 kubelet[2449]: I0813 00:42:49.164878 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.165732 kubelet[2449]: I0813 00:42:49.164923 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.166062 kubelet[2449]: I0813 00:42:49.164952 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.166062 kubelet[2449]: I0813 00:42:49.164986 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-kubeconfig\") pod 
\"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.166062 kubelet[2449]: I0813 00:42:49.165020 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.166062 kubelet[2449]: I0813 00:42:49.165047 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.166270 kubelet[2449]: I0813 00:42:49.165078 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.285432 kubelet[2449]: I0813 00:42:49.284341 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.286038 kubelet[2449]: E0813 00:42:49.285988 2449 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.331392 systemd[1]: Created slice kubepods-burstable-pod81327b1cb113e774589a4270f5c7bebe.slice - libcontainer container kubepods-burstable-pod81327b1cb113e774589a4270f5c7bebe.slice. Aug 13 00:42:49.399088 kubelet[2449]: E0813 00:42:49.399033 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.404714 containerd[1580]: time="2025-08-13T00:42:49.404587652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:81327b1cb113e774589a4270f5c7bebe,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:49.405305 systemd[1]: Created slice kubepods-burstable-pod9f77e1682d79b6d1cabda5050efc9db0.slice - libcontainer container kubepods-burstable-pod9f77e1682d79b6d1cabda5050efc9db0.slice. 
Aug 13 00:42:49.409827 containerd[1580]: time="2025-08-13T00:42:49.409610664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:872887847dda88055268f8077f5e62f0,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:49.421966 kubelet[2449]: E0813 00:42:49.421854 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.425800 containerd[1580]: time="2025-08-13T00:42:49.425154454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:9f77e1682d79b6d1cabda5050efc9db0,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:49.469880 kubelet[2449]: E0813 00:42:49.469505 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="800ms" Aug 13 00:42:49.544160 containerd[1580]: time="2025-08-13T00:42:49.540000626Z" level=info msg="connecting to shim 9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe" address="unix:///run/containerd/s/d25bc12398b329b8f11d95cb0bc737ecc34252791f24ee7f53aefad01000b773" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:42:49.597555 containerd[1580]: time="2025-08-13T00:42:49.597501278Z" level=info msg="connecting to shim fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f" address="unix:///run/containerd/s/b5ee6289a2189300ad823c1da2e878e1357d3cd304a8ee82bd32276e9889dca5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:42:49.656433 systemd[1]: Started 
cri-containerd-9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe.scope - libcontainer container 9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe. Aug 13 00:42:49.678477 systemd[1]: Started cri-containerd-fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f.scope - libcontainer container fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f. Aug 13 00:42:49.693800 kubelet[2449]: I0813 00:42:49.693762 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.694676 kubelet[2449]: E0813 00:42:49.694625 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:49.740045 kubelet[2449]: E0813 00:42:49.739467 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal.185b2cd4002e3f32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,UID:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,},FirstTimestamp:2025-08-13 00:42:48.83865989 +0000 UTC m=+1.229154943,LastTimestamp:2025-08-13 00:42:48.83865989 +0000 UTC m=+1.229154943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,}" Aug 13 00:42:50.008394 containerd[1580]: time="2025-08-13T00:42:50.008309007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:872887847dda88055268f8077f5e62f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f\"" Aug 13 00:42:50.020543 kubelet[2449]: E0813 00:42:50.018936 2449 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-21291" Aug 13 00:42:50.028320 containerd[1580]: time="2025-08-13T00:42:50.027804924Z" level=info msg="connecting to shim 7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d" address="unix:///run/containerd/s/f75b1963450aab7046d04e89a6351539080ae98e3582f27d9b396a6ce0448fb4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:42:50.059953 systemd[1]: Started cri-containerd-7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d.scope - libcontainer container 7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d. 
Aug 13 00:42:50.073594 containerd[1580]: time="2025-08-13T00:42:50.073539335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:81327b1cb113e774589a4270f5c7bebe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe\"" Aug 13 00:42:50.076165 containerd[1580]: time="2025-08-13T00:42:50.076104625Z" level=info msg="CreateContainer within sandbox \"fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:42:50.079387 kubelet[2449]: E0813 00:42:50.079341 2449 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flat" Aug 13 00:42:50.086548 containerd[1580]: time="2025-08-13T00:42:50.086500277Z" level=info msg="CreateContainer within sandbox \"9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:42:50.094563 containerd[1580]: time="2025-08-13T00:42:50.094481793Z" level=info msg="Container 4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:50.116046 containerd[1580]: time="2025-08-13T00:42:50.115832990Z" level=info msg="Container 4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:50.118045 kubelet[2449]: E0813 00:42:50.117977 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:42:50.129332 containerd[1580]: time="2025-08-13T00:42:50.129149028Z" level=info msg="CreateContainer within sandbox \"fb3ce947d63f519a9821c1ecbb8880ee61f4909697373a5fcce4aa9e7b91aa2f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb\"" Aug 13 00:42:50.131306 containerd[1580]: time="2025-08-13T00:42:50.131259914Z" level=info msg="StartContainer for \"4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb\"" Aug 13 00:42:50.137035 containerd[1580]: time="2025-08-13T00:42:50.136443533Z" level=info msg="connecting to shim 4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb" address="unix:///run/containerd/s/b5ee6289a2189300ad823c1da2e878e1357d3cd304a8ee82bd32276e9889dca5" protocol=ttrpc version=3 Aug 13 00:42:50.142847 containerd[1580]: time="2025-08-13T00:42:50.142778310Z" level=info msg="CreateContainer within sandbox \"9ed6761006b089cd2b96321b53d204bd01d7e1caa0832c51599130c25cd9cafe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587\"" Aug 13 00:42:50.144712 containerd[1580]: time="2025-08-13T00:42:50.143815974Z" level=info msg="StartContainer for \"4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587\"" Aug 13 00:42:50.149270 containerd[1580]: time="2025-08-13T00:42:50.149219429Z" level=info msg="connecting to shim 4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587" address="unix:///run/containerd/s/d25bc12398b329b8f11d95cb0bc737ecc34252791f24ee7f53aefad01000b773" protocol=ttrpc version=3 Aug 13 00:42:50.163809 containerd[1580]: time="2025-08-13T00:42:50.163757222Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal,Uid:9f77e1682d79b6d1cabda5050efc9db0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d\"" Aug 13 00:42:50.167899 kubelet[2449]: E0813 00:42:50.167857 2449 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-21291" Aug 13 00:42:50.173408 containerd[1580]: time="2025-08-13T00:42:50.173355670Z" level=info msg="CreateContainer within sandbox \"7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:42:50.186284 systemd[1]: Started cri-containerd-4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb.scope - libcontainer container 4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb. Aug 13 00:42:50.195532 containerd[1580]: time="2025-08-13T00:42:50.195454296Z" level=info msg="Container 7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:50.219316 containerd[1580]: time="2025-08-13T00:42:50.219070444Z" level=info msg="CreateContainer within sandbox \"7f893a95dbde460d54005635e282154dcbde20c4be27635fc5dd2ac59844a94d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3\"" Aug 13 00:42:50.219149 systemd[1]: Started cri-containerd-4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587.scope - libcontainer container 4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587. 
Aug 13 00:42:50.223659 containerd[1580]: time="2025-08-13T00:42:50.223176090Z" level=info msg="StartContainer for \"7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3\"" Aug 13 00:42:50.230742 kubelet[2449]: E0813 00:42:50.230497 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:42:50.232171 containerd[1580]: time="2025-08-13T00:42:50.231948042Z" level=info msg="connecting to shim 7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3" address="unix:///run/containerd/s/f75b1963450aab7046d04e89a6351539080ae98e3582f27d9b396a6ce0448fb4" protocol=ttrpc version=3 Aug 13 00:42:50.267570 kubelet[2449]: E0813 00:42:50.267434 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:42:50.270000 systemd[1]: Started cri-containerd-7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3.scope - libcontainer container 7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3. 
Aug 13 00:42:50.271922 kubelet[2449]: E0813 00:42:50.271818 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="1.6s" Aug 13 00:42:50.344333 containerd[1580]: time="2025-08-13T00:42:50.343789058Z" level=info msg="StartContainer for \"4e1e9cf5b6332bddf330beb2a1310f21217ad7ac4500a1f70f0bc923f332d1cb\" returns successfully" Aug 13 00:42:50.354765 kubelet[2449]: E0813 00:42:50.354713 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:42:50.381202 containerd[1580]: time="2025-08-13T00:42:50.381145469Z" level=info msg="StartContainer for \"4765705ee9b8dc182f88f9dab5d25c562ea20180b6bc0212eec87cf3c6fdd587\" returns successfully" Aug 13 00:42:50.459994 containerd[1580]: time="2025-08-13T00:42:50.459843527Z" level=info msg="StartContainer for \"7c05911221ae2b1210182efb2f213eb71a7b771c73015db44a3e1e6704a4bcd3\" returns successfully" Aug 13 00:42:50.501662 kubelet[2449]: I0813 00:42:50.501531 2449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:50.918637 kubelet[2449]: E0813 00:42:50.918594 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:50.925454 kubelet[2449]: E0813 00:42:50.925406 2449 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:50.927714 kubelet[2449]: E0813 00:42:50.927642 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:51.933626 kubelet[2449]: E0813 00:42:51.933576 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:51.934246 kubelet[2449]: E0813 00:42:51.933819 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.286748 kubelet[2449]: I0813 00:42:53.285535 2449 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.286748 kubelet[2449]: E0813 00:42:53.285632 2449 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\": node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" not found" Aug 13 00:42:53.361588 kubelet[2449]: I0813 00:42:53.361325 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.433976 kubelet[2449]: E0813 00:42:53.433921 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" 
not found" interval="3.2s" Aug 13 00:42:53.446954 kubelet[2449]: E0813 00:42:53.446876 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.446954 kubelet[2449]: I0813 00:42:53.446942 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.455050 kubelet[2449]: E0813 00:42:53.454991 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.455050 kubelet[2449]: I0813 00:42:53.455037 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.460001 kubelet[2449]: E0813 00:42:53.459956 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:53.816747 kubelet[2449]: I0813 00:42:53.816707 2449 apiserver.go:52] "Watching apiserver" Aug 13 00:42:53.862200 kubelet[2449]: I0813 00:42:53.862144 2449 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:42:54.293714 kubelet[2449]: I0813 00:42:54.293650 2449 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:54.304718 kubelet[2449]: I0813 00:42:54.303588 2449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:55.854739 kubelet[2449]: I0813 00:42:55.852670 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:55.862094 kubelet[2449]: I0813 00:42:55.861725 2449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:56.683705 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit session-9.scope)... Aug 13 00:42:56.683727 systemd[1]: Reloading... Aug 13 00:42:56.868741 zram_generator::config[2793]: No configuration found. Aug 13 00:42:57.024585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:42:57.247762 systemd[1]: Reloading finished in 563 ms. Aug 13 00:42:57.295162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:57.318831 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:42:57.319192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:42:57.319274 systemd[1]: kubelet.service: Consumed 1.858s CPU time, 134.3M memory peak. Aug 13 00:42:57.325630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:42:57.707778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:42:57.723956 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:42:57.805139 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:42:57.805139 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:42:57.805139 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:42:57.805139 kubelet[2840]: I0813 00:42:57.803518 2840 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:42:57.815970 kubelet[2840]: I0813 00:42:57.815926 2840 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:42:57.815970 kubelet[2840]: I0813 00:42:57.815965 2840 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:42:57.816906 kubelet[2840]: I0813 00:42:57.816343 2840 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:42:57.818844 kubelet[2840]: I0813 00:42:57.818708 2840 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:42:57.822576 kubelet[2840]: I0813 00:42:57.822517 2840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:42:57.831738 kubelet[2840]: I0813 00:42:57.831643 2840 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Aug 13 00:42:57.845218 kubelet[2840]: I0813 00:42:57.844634 2840 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:42:57.845218 kubelet[2840]: I0813 00:42:57.845013 2840 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:42:57.845505 kubelet[2840]: I0813 00:42:57.845050 2840 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:42:57.845505 kubelet[2840]: I0813 00:42:57.845480 2840 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:42:57.845505 kubelet[2840]: I0813 00:42:57.845498 2840 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:42:57.846087 kubelet[2840]: I0813 00:42:57.845569 2840 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:42:57.846155 kubelet[2840]: I0813 00:42:57.846102 2840 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:42:57.846155 kubelet[2840]: I0813 00:42:57.846126 2840 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:42:57.846349 kubelet[2840]: I0813 00:42:57.846160 2840 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:42:57.846349 kubelet[2840]: I0813 00:42:57.846184 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:42:57.854713 kubelet[2840]: I0813 00:42:57.852333 2840 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:42:57.854713 kubelet[2840]: I0813 00:42:57.854657 2840 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:42:57.893072 kubelet[2840]: I0813 00:42:57.893040 2840 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:42:57.893389 kubelet[2840]: I0813 00:42:57.893369 2840 server.go:1289] "Started kubelet" Aug 13 00:42:57.895856 kubelet[2840]: I0813 00:42:57.895825 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:42:57.906908 kubelet[2840]: I0813 00:42:57.906878 2840 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:42:57.907864 kubelet[2840]: I0813 00:42:57.907842 2840 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:42:57.908129 
kubelet[2840]: I0813 00:42:57.906909 2840 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:42:57.909353 kubelet[2840]: I0813 00:42:57.909101 2840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:42:57.909856 kubelet[2840]: I0813 00:42:57.909532 2840 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:42:57.911975 kubelet[2840]: I0813 00:42:57.911191 2840 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:42:57.914906 kubelet[2840]: I0813 00:42:57.914883 2840 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:42:57.917976 kubelet[2840]: I0813 00:42:57.917512 2840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:42:57.924209 kubelet[2840]: I0813 00:42:57.923384 2840 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:42:57.924209 kubelet[2840]: I0813 00:42:57.923533 2840 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:42:57.938812 kubelet[2840]: I0813 00:42:57.938733 2840 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:42:57.939320 kubelet[2840]: I0813 00:42:57.939262 2840 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:42:57.942212 kubelet[2840]: I0813 00:42:57.941260 2840 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:42:57.942212 kubelet[2840]: I0813 00:42:57.941300 2840 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:42:57.942212 kubelet[2840]: I0813 00:42:57.941328 2840 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:42:57.942212 kubelet[2840]: I0813 00:42:57.941339 2840 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:42:57.942212 kubelet[2840]: E0813 00:42:57.941397 2840 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:42:57.949575 kubelet[2840]: E0813 00:42:57.949427 2840 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:42:58.041969 kubelet[2840]: E0813 00:42:58.041798 2840 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.045746 2840 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.045771 2840 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.045799 2840 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.045995 2840 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.046010 2840 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.046035 2840 policy_none.go:49] "None policy: Start" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.046048 2840 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:42:58.046136 kubelet[2840]: I0813 00:42:58.046063 2840 state_mem.go:35] 
"Initializing new in-memory state store" Aug 13 00:42:58.046738 kubelet[2840]: I0813 00:42:58.046613 2840 state_mem.go:75] "Updated machine memory state" Aug 13 00:42:58.057327 kubelet[2840]: E0813 00:42:58.056707 2840 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:42:58.057876 kubelet[2840]: I0813 00:42:58.057782 2840 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:42:58.058082 kubelet[2840]: I0813 00:42:58.058033 2840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:42:58.059467 kubelet[2840]: I0813 00:42:58.059389 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:42:58.067468 kubelet[2840]: E0813 00:42:58.066163 2840 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:42:58.183300 kubelet[2840]: I0813 00:42:58.182879 2840 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.197608 kubelet[2840]: I0813 00:42:58.197501 2840 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.198129 kubelet[2840]: I0813 00:42:58.198099 2840 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.244843 kubelet[2840]: I0813 00:42:58.244002 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.244843 kubelet[2840]: I0813 00:42:58.244607 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 
00:42:58.247041 kubelet[2840]: I0813 00:42:58.247005 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.255675 kubelet[2840]: I0813 00:42:58.255446 2840 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:58.255675 kubelet[2840]: E0813 00:42:58.255522 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.258375 kubelet[2840]: I0813 00:42:58.258316 2840 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:58.259159 kubelet[2840]: I0813 00:42:58.259115 2840 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:58.259278 kubelet[2840]: E0813 00:42:58.259196 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.319397 kubelet[2840]: I0813 00:42:58.318858 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.319397 kubelet[2840]: I0813 00:42:58.318923 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f77e1682d79b6d1cabda5050efc9db0-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"9f77e1682d79b6d1cabda5050efc9db0\") " pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.319397 kubelet[2840]: I0813 00:42:58.318959 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.319397 kubelet[2840]: I0813 00:42:58.318993 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.320160 kubelet[2840]: I0813 00:42:58.319056 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-kubeconfig\") pod 
\"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.320160 kubelet[2840]: I0813 00:42:58.319088 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.320160 kubelet[2840]: I0813 00:42:58.319120 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/872887847dda88055268f8077f5e62f0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"872887847dda88055268f8077f5e62f0\") " pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.320160 kubelet[2840]: I0813 00:42:58.319151 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.320645 kubelet[2840]: I0813 00:42:58.319179 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81327b1cb113e774589a4270f5c7bebe-k8s-certs\") pod 
\"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" (UID: \"81327b1cb113e774589a4270f5c7bebe\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:58.848889 kubelet[2840]: I0813 00:42:58.848766 2840 apiserver.go:52] "Watching apiserver" Aug 13 00:42:58.908846 kubelet[2840]: I0813 00:42:58.908795 2840 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:42:59.010477 kubelet[2840]: I0813 00:42:59.009723 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:59.031253 kubelet[2840]: I0813 00:42:59.031172 2840 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Aug 13 00:42:59.031887 kubelet[2840]: E0813 00:42:59.031494 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:42:59.070851 kubelet[2840]: I0813 00:42:59.070769 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" podStartSLOduration=4.07074312 podStartE2EDuration="4.07074312s" podCreationTimestamp="2025-08-13 00:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:59.070089307 +0000 UTC m=+1.337534177" watchObservedRunningTime="2025-08-13 00:42:59.07074312 +0000 UTC m=+1.338187993" Aug 13 00:42:59.106808 kubelet[2840]: I0813 00:42:59.106589 2840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" podStartSLOduration=5.10654992 podStartE2EDuration="5.10654992s" podCreationTimestamp="2025-08-13 00:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:59.090230168 +0000 UTC m=+1.357675027" watchObservedRunningTime="2025-08-13 00:42:59.10654992 +0000 UTC m=+1.373994785" Aug 13 00:42:59.143256 kubelet[2840]: I0813 00:42:59.143033 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" podStartSLOduration=1.143006556 podStartE2EDuration="1.143006556s" podCreationTimestamp="2025-08-13 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:59.108950171 +0000 UTC m=+1.376395032" watchObservedRunningTime="2025-08-13 00:42:59.143006556 +0000 UTC m=+1.410451429" Aug 13 00:43:00.384385 kubelet[2840]: I0813 00:43:00.384330 2840 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:43:00.385036 containerd[1580]: time="2025-08-13T00:43:00.384989059Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:43:00.385515 kubelet[2840]: I0813 00:43:00.385494 2840 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:43:01.312981 systemd[1]: Created slice kubepods-besteffort-podfb325fca_4134_4dd4_a068_7246c3c4850b.slice - libcontainer container kubepods-besteffort-podfb325fca_4134_4dd4_a068_7246c3c4850b.slice. 
Aug 13 00:43:01.341708 kubelet[2840]: I0813 00:43:01.341535 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2xw\" (UniqueName: \"kubernetes.io/projected/fb325fca-4134-4dd4-a068-7246c3c4850b-kube-api-access-dz2xw\") pod \"kube-proxy-8gjv7\" (UID: \"fb325fca-4134-4dd4-a068-7246c3c4850b\") " pod="kube-system/kube-proxy-8gjv7" Aug 13 00:43:01.341929 kubelet[2840]: I0813 00:43:01.341750 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb325fca-4134-4dd4-a068-7246c3c4850b-lib-modules\") pod \"kube-proxy-8gjv7\" (UID: \"fb325fca-4134-4dd4-a068-7246c3c4850b\") " pod="kube-system/kube-proxy-8gjv7" Aug 13 00:43:01.341929 kubelet[2840]: I0813 00:43:01.341789 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb325fca-4134-4dd4-a068-7246c3c4850b-kube-proxy\") pod \"kube-proxy-8gjv7\" (UID: \"fb325fca-4134-4dd4-a068-7246c3c4850b\") " pod="kube-system/kube-proxy-8gjv7" Aug 13 00:43:01.341929 kubelet[2840]: I0813 00:43:01.341813 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb325fca-4134-4dd4-a068-7246c3c4850b-xtables-lock\") pod \"kube-proxy-8gjv7\" (UID: \"fb325fca-4134-4dd4-a068-7246c3c4850b\") " pod="kube-system/kube-proxy-8gjv7" Aug 13 00:43:01.435810 systemd[1]: Created slice kubepods-besteffort-pod59c0be2e_e469_4e3f_a89a_e4216172eca1.slice - libcontainer container kubepods-besteffort-pod59c0be2e_e469_4e3f_a89a_e4216172eca1.slice. 
Aug 13 00:43:01.443878 kubelet[2840]: I0813 00:43:01.442570 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7b2\" (UniqueName: \"kubernetes.io/projected/59c0be2e-e469-4e3f-a89a-e4216172eca1-kube-api-access-fn7b2\") pod \"tigera-operator-747864d56d-rdqfp\" (UID: \"59c0be2e-e469-4e3f-a89a-e4216172eca1\") " pod="tigera-operator/tigera-operator-747864d56d-rdqfp" Aug 13 00:43:01.444898 kubelet[2840]: I0813 00:43:01.444858 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59c0be2e-e469-4e3f-a89a-e4216172eca1-var-lib-calico\") pod \"tigera-operator-747864d56d-rdqfp\" (UID: \"59c0be2e-e469-4e3f-a89a-e4216172eca1\") " pod="tigera-operator/tigera-operator-747864d56d-rdqfp" Aug 13 00:43:01.625876 containerd[1580]: time="2025-08-13T00:43:01.625776489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gjv7,Uid:fb325fca-4134-4dd4-a068-7246c3c4850b,Namespace:kube-system,Attempt:0,}" Aug 13 00:43:01.662654 containerd[1580]: time="2025-08-13T00:43:01.662577744Z" level=info msg="connecting to shim 0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c" address="unix:///run/containerd/s/464687579e0c48ae4b28cb1b67b2e0f3e1da43622b913d7a482912a6b15bd540" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:01.703983 systemd[1]: Started cri-containerd-0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c.scope - libcontainer container 0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c. 
Aug 13 00:43:01.742206 containerd[1580]: time="2025-08-13T00:43:01.742120958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rdqfp,Uid:59c0be2e-e469-4e3f-a89a-e4216172eca1,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:43:01.755218 containerd[1580]: time="2025-08-13T00:43:01.755140270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gjv7,Uid:fb325fca-4134-4dd4-a068-7246c3c4850b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c\"" Aug 13 00:43:01.761419 containerd[1580]: time="2025-08-13T00:43:01.761365702Z" level=info msg="CreateContainer within sandbox \"0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:43:01.782167 containerd[1580]: time="2025-08-13T00:43:01.782046456Z" level=info msg="Container 136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:01.790076 containerd[1580]: time="2025-08-13T00:43:01.790010010Z" level=info msg="connecting to shim 48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d" address="unix:///run/containerd/s/4debb3f131b2dd695d0548153e1bd3174b590100e5e609103cf1096a95c2c969" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:01.801345 containerd[1580]: time="2025-08-13T00:43:01.801227448Z" level=info msg="CreateContainer within sandbox \"0cbf7007efb953e3d8c58f4bc5c53b6a076d05d82bd2969aaf0d5ca1e412130c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623\"" Aug 13 00:43:01.802506 containerd[1580]: time="2025-08-13T00:43:01.802461208Z" level=info msg="StartContainer for \"136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623\"" Aug 13 00:43:01.805636 containerd[1580]: time="2025-08-13T00:43:01.805552459Z" level=info msg="connecting to shim 
136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623" address="unix:///run/containerd/s/464687579e0c48ae4b28cb1b67b2e0f3e1da43622b913d7a482912a6b15bd540" protocol=ttrpc version=3 Aug 13 00:43:01.837962 systemd[1]: Started cri-containerd-48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d.scope - libcontainer container 48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d. Aug 13 00:43:01.845063 systemd[1]: Started cri-containerd-136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623.scope - libcontainer container 136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623. Aug 13 00:43:01.942072 containerd[1580]: time="2025-08-13T00:43:01.941868929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-rdqfp,Uid:59c0be2e-e469-4e3f-a89a-e4216172eca1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d\"" Aug 13 00:43:01.946337 containerd[1580]: time="2025-08-13T00:43:01.946219732Z" level=info msg="StartContainer for \"136819a251525cf0b0812ca30ec1968d987d191dc04757e0fb51d40e9f98e623\" returns successfully" Aug 13 00:43:01.960322 containerd[1580]: time="2025-08-13T00:43:01.960090205Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:43:03.028715 kubelet[2840]: I0813 00:43:03.028007 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8gjv7" podStartSLOduration=2.027982895 podStartE2EDuration="2.027982895s" podCreationTimestamp="2025-08-13 00:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:43:02.049042377 +0000 UTC m=+4.316487250" watchObservedRunningTime="2025-08-13 00:43:03.027982895 +0000 UTC m=+5.295427764" Aug 13 00:43:03.176084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953361024.mount: Deactivated successfully. 
Aug 13 00:43:04.256897 containerd[1580]: time="2025-08-13T00:43:04.256820794Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:04.258276 containerd[1580]: time="2025-08-13T00:43:04.258219058Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:43:04.259890 containerd[1580]: time="2025-08-13T00:43:04.259815422Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:04.263083 containerd[1580]: time="2025-08-13T00:43:04.263003767Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:04.264990 containerd[1580]: time="2025-08-13T00:43:04.264379022Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.304214966s" Aug 13 00:43:04.264990 containerd[1580]: time="2025-08-13T00:43:04.264428489Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:43:04.271992 containerd[1580]: time="2025-08-13T00:43:04.271935428Z" level=info msg="CreateContainer within sandbox \"48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:43:04.283722 containerd[1580]: time="2025-08-13T00:43:04.281681631Z" level=info msg="Container 
54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:04.296426 containerd[1580]: time="2025-08-13T00:43:04.296349858Z" level=info msg="CreateContainer within sandbox \"48cf0186045cbd162423b5506c0ab78ccf760544222ca8338d8bc5efbbccd07d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843\"" Aug 13 00:43:04.297749 containerd[1580]: time="2025-08-13T00:43:04.297714233Z" level=info msg="StartContainer for \"54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843\"" Aug 13 00:43:04.299765 containerd[1580]: time="2025-08-13T00:43:04.299727421Z" level=info msg="connecting to shim 54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843" address="unix:///run/containerd/s/4debb3f131b2dd695d0548153e1bd3174b590100e5e609103cf1096a95c2c969" protocol=ttrpc version=3 Aug 13 00:43:04.335053 systemd[1]: Started cri-containerd-54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843.scope - libcontainer container 54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843. 
Aug 13 00:43:04.396189 containerd[1580]: time="2025-08-13T00:43:04.396104729Z" level=info msg="StartContainer for \"54f4b6b87479aa9f21db80910cadff1453ecb6fa4765235aea39f2182cbcf843\" returns successfully" Aug 13 00:43:05.049202 kubelet[2840]: I0813 00:43:05.049120 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-rdqfp" podStartSLOduration=1.742014197 podStartE2EDuration="4.04909614s" podCreationTimestamp="2025-08-13 00:43:01 +0000 UTC" firstStartedPulling="2025-08-13 00:43:01.958957815 +0000 UTC m=+4.226402679" lastFinishedPulling="2025-08-13 00:43:04.266039778 +0000 UTC m=+6.533484622" observedRunningTime="2025-08-13 00:43:05.048942655 +0000 UTC m=+7.316387524" watchObservedRunningTime="2025-08-13 00:43:05.04909614 +0000 UTC m=+7.316540983" Aug 13 00:43:11.638010 sudo[1891]: pam_unix(sudo:session): session closed for user root Aug 13 00:43:11.683028 sshd[1890]: Connection closed by 139.178.68.195 port 39300 Aug 13 00:43:11.684393 sshd-session[1888]: pam_unix(sshd:session): session closed for user core Aug 13 00:43:11.694262 systemd[1]: sshd@8-10.128.0.26:22-139.178.68.195:39300.service: Deactivated successfully. Aug 13 00:43:11.694587 systemd-logind[1553]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:43:11.702356 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:43:11.702715 systemd[1]: session-9.scope: Consumed 8.064s CPU time, 229.1M memory peak. Aug 13 00:43:11.709947 systemd-logind[1553]: Removed session 9. Aug 13 00:43:17.330651 systemd[1]: Created slice kubepods-besteffort-pod61604259_53a2_4af1_8df1_7c621d4a1d13.slice - libcontainer container kubepods-besteffort-pod61604259_53a2_4af1_8df1_7c621d4a1d13.slice. 
Aug 13 00:43:17.358308 kubelet[2840]: I0813 00:43:17.358236 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61604259-53a2-4af1-8df1-7c621d4a1d13-tigera-ca-bundle\") pod \"calico-typha-784bfbb7cc-rfnp2\" (UID: \"61604259-53a2-4af1-8df1-7c621d4a1d13\") " pod="calico-system/calico-typha-784bfbb7cc-rfnp2" Aug 13 00:43:17.358308 kubelet[2840]: I0813 00:43:17.358316 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/61604259-53a2-4af1-8df1-7c621d4a1d13-typha-certs\") pod \"calico-typha-784bfbb7cc-rfnp2\" (UID: \"61604259-53a2-4af1-8df1-7c621d4a1d13\") " pod="calico-system/calico-typha-784bfbb7cc-rfnp2" Aug 13 00:43:17.359085 kubelet[2840]: I0813 00:43:17.358349 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pms\" (UniqueName: \"kubernetes.io/projected/61604259-53a2-4af1-8df1-7c621d4a1d13-kube-api-access-j8pms\") pod \"calico-typha-784bfbb7cc-rfnp2\" (UID: \"61604259-53a2-4af1-8df1-7c621d4a1d13\") " pod="calico-system/calico-typha-784bfbb7cc-rfnp2" Aug 13 00:43:17.632591 systemd[1]: Created slice kubepods-besteffort-pod70b8f94a_6ccd_4742_9f38_f74cd1bc4396.slice - libcontainer container kubepods-besteffort-pod70b8f94a_6ccd_4742_9f38_f74cd1bc4396.slice. 
Aug 13 00:43:17.641841 containerd[1580]: time="2025-08-13T00:43:17.641796400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784bfbb7cc-rfnp2,Uid:61604259-53a2-4af1-8df1-7c621d4a1d13,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:17.664420 kubelet[2840]: I0813 00:43:17.663849 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-node-certs\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.664420 kubelet[2840]: I0813 00:43:17.663926 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-var-lib-calico\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.664420 kubelet[2840]: I0813 00:43:17.663958 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-cni-log-dir\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.664420 kubelet[2840]: I0813 00:43:17.664000 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-flexvol-driver-host\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.664420 kubelet[2840]: I0813 00:43:17.664034 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-cni-bin-dir\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670014 kubelet[2840]: I0813 00:43:17.664058 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-tigera-ca-bundle\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670014 kubelet[2840]: I0813 00:43:17.664086 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsz6\" (UniqueName: \"kubernetes.io/projected/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-kube-api-access-wnsz6\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670014 kubelet[2840]: I0813 00:43:17.664113 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-cni-net-dir\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670014 kubelet[2840]: I0813 00:43:17.664139 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-var-run-calico\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670014 kubelet[2840]: I0813 00:43:17.664168 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-xtables-lock\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670280 kubelet[2840]: I0813 00:43:17.664196 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-lib-modules\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.670280 kubelet[2840]: I0813 00:43:17.664227 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/70b8f94a-6ccd-4742-9f38-f74cd1bc4396-policysync\") pod \"calico-node-sb6kh\" (UID: \"70b8f94a-6ccd-4742-9f38-f74cd1bc4396\") " pod="calico-system/calico-node-sb6kh" Aug 13 00:43:17.693090 containerd[1580]: time="2025-08-13T00:43:17.693029300Z" level=info msg="connecting to shim a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543" address="unix:///run/containerd/s/9650bb609e8963cbb1fa769a6c91af53907f89d7de6db72bae6a45fec4b280f8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:17.758530 systemd[1]: Started cri-containerd-a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543.scope - libcontainer container a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543. 
Aug 13 00:43:17.776918 kubelet[2840]: E0813 00:43:17.775455 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:17.776918 kubelet[2840]: W0813 00:43:17.775763 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:17.776918 kubelet[2840]: E0813 00:43:17.775823 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:17.802019 kubelet[2840]: E0813 00:43:17.801921 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:17.802019 kubelet[2840]: W0813 00:43:17.801951 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:17.802019 kubelet[2840]: E0813 00:43:17.801977 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:17.888433 containerd[1580]: time="2025-08-13T00:43:17.887520877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784bfbb7cc-rfnp2,Uid:61604259-53a2-4af1-8df1-7c621d4a1d13,Namespace:calico-system,Attempt:0,} returns sandbox id \"a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543\"" Aug 13 00:43:17.892575 containerd[1580]: time="2025-08-13T00:43:17.892120542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:43:17.940543 containerd[1580]: time="2025-08-13T00:43:17.940467125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sb6kh,Uid:70b8f94a-6ccd-4742-9f38-f74cd1bc4396,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:17.970298 kubelet[2840]: E0813 00:43:17.970151 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:17.985887 containerd[1580]: time="2025-08-13T00:43:17.985799092Z" level=info msg="connecting to shim 944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34" address="unix:///run/containerd/s/dc2bef38a22cf8b597c8cd30d28b32c33ebfd79ca261b30da7a3e2da5d2568c0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:18.051339 systemd[1]: Started cri-containerd-944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34.scope - libcontainer container 944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34. 
Aug 13 00:43:18.054155 kubelet[2840]: E0813 00:43:18.053797 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.054155 kubelet[2840]: W0813 00:43:18.053864 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.055427 kubelet[2840]: E0813 00:43:18.054606 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.073268 kubelet[2840]: E0813 00:43:18.073114 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.073268 kubelet[2840]: W0813 00:43:18.073132 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.073268 kubelet[2840]: E0813 00:43:18.073147 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.073268 kubelet[2840]: I0813 00:43:18.073180 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459s5\" (UniqueName: \"kubernetes.io/projected/40a52536-acd7-4e83-951d-9cdacd5200e1-kube-api-access-459s5\") pod \"csi-node-driver-hnp88\" (UID: \"40a52536-acd7-4e83-951d-9cdacd5200e1\") " pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:18.075463 kubelet[2840]: I0813 00:43:18.075439 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40a52536-acd7-4e83-951d-9cdacd5200e1-socket-dir\") pod \"csi-node-driver-hnp88\" (UID: \"40a52536-acd7-4e83-951d-9cdacd5200e1\") " pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:18.079357 kubelet[2840]: I0813 00:43:18.078792 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40a52536-acd7-4e83-951d-9cdacd5200e1-registration-dir\") pod \"csi-node-driver-hnp88\" (UID: \"40a52536-acd7-4e83-951d-9cdacd5200e1\") " pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:18.085001 kubelet[2840]: I0813 00:43:18.084844 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40a52536-acd7-4e83-951d-9cdacd5200e1-kubelet-dir\") pod \"csi-node-driver-hnp88\" (UID: \"40a52536-acd7-4e83-951d-9cdacd5200e1\") " pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:18.086833 kubelet[2840]: E0813 00:43:18.086359 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.086833 kubelet[2840]: W0813 00:43:18.086380 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.086833 kubelet[2840]: E0813 00:43:18.086401 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.089407 kubelet[2840]: E0813 00:43:18.089294 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.089836 kubelet[2840]: W0813 00:43:18.089567 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.089836 kubelet[2840]: E0813 00:43:18.089747 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.090995 kubelet[2840]: E0813 00:43:18.090885 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.090995 kubelet[2840]: W0813 00:43:18.090911 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.090995 kubelet[2840]: E0813 00:43:18.090931 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.091727 kubelet[2840]: E0813 00:43:18.091219 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.091727 kubelet[2840]: W0813 00:43:18.091233 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.091727 kubelet[2840]: E0813 00:43:18.091250 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.091727 kubelet[2840]: I0813 00:43:18.091291 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40a52536-acd7-4e83-951d-9cdacd5200e1-varrun\") pod \"csi-node-driver-hnp88\" (UID: \"40a52536-acd7-4e83-951d-9cdacd5200e1\") " pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:18.092429 kubelet[2840]: E0813 00:43:18.091793 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.092429 kubelet[2840]: W0813 00:43:18.091812 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.092429 kubelet[2840]: E0813 00:43:18.091829 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.092429 kubelet[2840]: E0813 00:43:18.092398 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.092429 kubelet[2840]: W0813 00:43:18.092414 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.092429 kubelet[2840]: E0813 00:43:18.092430 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.094397 kubelet[2840]: E0813 00:43:18.092952 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.094397 kubelet[2840]: W0813 00:43:18.092965 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.094397 kubelet[2840]: E0813 00:43:18.092981 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.094397 kubelet[2840]: E0813 00:43:18.093713 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.094397 kubelet[2840]: W0813 00:43:18.093729 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.094397 kubelet[2840]: E0813 00:43:18.093748 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.095117 kubelet[2840]: E0813 00:43:18.094596 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.095117 kubelet[2840]: W0813 00:43:18.094612 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.095117 kubelet[2840]: E0813 00:43:18.094628 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.194022 kubelet[2840]: E0813 00:43:18.192709 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.194022 kubelet[2840]: W0813 00:43:18.192739 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.194022 kubelet[2840]: E0813 00:43:18.192767 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.194022 kubelet[2840]: E0813 00:43:18.193925 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.194022 kubelet[2840]: W0813 00:43:18.193945 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.194022 kubelet[2840]: E0813 00:43:18.193969 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.197022 kubelet[2840]: E0813 00:43:18.195308 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.197022 kubelet[2840]: W0813 00:43:18.195330 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.197022 kubelet[2840]: E0813 00:43:18.195353 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.198263 kubelet[2840]: E0813 00:43:18.197359 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.198263 kubelet[2840]: W0813 00:43:18.197377 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.198263 kubelet[2840]: E0813 00:43:18.197520 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.201185 kubelet[2840]: E0813 00:43:18.200931 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.201185 kubelet[2840]: W0813 00:43:18.200955 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.201185 kubelet[2840]: E0813 00:43:18.200979 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.203037 kubelet[2840]: E0813 00:43:18.202797 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.203037 kubelet[2840]: W0813 00:43:18.202820 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.203037 kubelet[2840]: E0813 00:43:18.202842 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.206454 kubelet[2840]: E0813 00:43:18.205774 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.206454 kubelet[2840]: W0813 00:43:18.205799 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.206454 kubelet[2840]: E0813 00:43:18.205823 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.206803 kubelet[2840]: E0813 00:43:18.206784 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.206913 kubelet[2840]: W0813 00:43:18.206896 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.207009 kubelet[2840]: E0813 00:43:18.206993 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.207804 kubelet[2840]: E0813 00:43:18.207783 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.208186 kubelet[2840]: W0813 00:43:18.207923 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.208186 kubelet[2840]: E0813 00:43:18.207948 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.208457 kubelet[2840]: E0813 00:43:18.208407 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.208457 kubelet[2840]: W0813 00:43:18.208426 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.208457 kubelet[2840]: E0813 00:43:18.208441 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.209773 kubelet[2840]: E0813 00:43:18.209754 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.211001 kubelet[2840]: W0813 00:43:18.210935 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.211001 kubelet[2840]: E0813 00:43:18.210965 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.216679 kubelet[2840]: E0813 00:43:18.216540 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.217048 kubelet[2840]: W0813 00:43:18.216978 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.217048 kubelet[2840]: E0813 00:43:18.217024 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.218885 kubelet[2840]: E0813 00:43:18.218841 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.219398 kubelet[2840]: W0813 00:43:18.219255 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.219398 kubelet[2840]: E0813 00:43:18.219294 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.220933 kubelet[2840]: E0813 00:43:18.220676 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.220933 kubelet[2840]: W0813 00:43:18.220715 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.220933 kubelet[2840]: E0813 00:43:18.220739 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.221521 kubelet[2840]: E0813 00:43:18.221337 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.221521 kubelet[2840]: W0813 00:43:18.221356 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.221521 kubelet[2840]: E0813 00:43:18.221373 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.222458 kubelet[2840]: E0813 00:43:18.222431 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.222458 kubelet[2840]: W0813 00:43:18.222455 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.222616 kubelet[2840]: E0813 00:43:18.222479 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.223335 kubelet[2840]: E0813 00:43:18.223293 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.223335 kubelet[2840]: W0813 00:43:18.223317 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.223335 kubelet[2840]: E0813 00:43:18.223336 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.224508 kubelet[2840]: E0813 00:43:18.224331 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.224508 kubelet[2840]: W0813 00:43:18.224356 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.224508 kubelet[2840]: E0813 00:43:18.224374 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.226420 kubelet[2840]: E0813 00:43:18.226329 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.226420 kubelet[2840]: W0813 00:43:18.226352 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.226802 kubelet[2840]: E0813 00:43:18.226476 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.227728 kubelet[2840]: E0813 00:43:18.227468 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.227728 kubelet[2840]: W0813 00:43:18.227485 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.227728 kubelet[2840]: E0813 00:43:18.227502 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.228956 kubelet[2840]: E0813 00:43:18.228927 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.229066 kubelet[2840]: W0813 00:43:18.228963 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.229066 kubelet[2840]: E0813 00:43:18.228982 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.231793 kubelet[2840]: E0813 00:43:18.231745 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.231793 kubelet[2840]: W0813 00:43:18.231784 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.231981 kubelet[2840]: E0813 00:43:18.231803 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.233572 kubelet[2840]: E0813 00:43:18.233489 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.233572 kubelet[2840]: W0813 00:43:18.233562 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.235440 kubelet[2840]: E0813 00:43:18.233585 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.236528 kubelet[2840]: E0813 00:43:18.236493 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.236632 kubelet[2840]: W0813 00:43:18.236576 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.236632 kubelet[2840]: E0813 00:43:18.236601 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:18.238091 kubelet[2840]: E0813 00:43:18.238047 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.238800 kubelet[2840]: W0813 00:43:18.238340 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.238800 kubelet[2840]: E0813 00:43:18.238376 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.258737 containerd[1580]: time="2025-08-13T00:43:18.258360111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sb6kh,Uid:70b8f94a-6ccd-4742-9f38-f74cd1bc4396,Namespace:calico-system,Attempt:0,} returns sandbox id \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\"" Aug 13 00:43:18.261761 kubelet[2840]: E0813 00:43:18.261511 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:18.261761 kubelet[2840]: W0813 00:43:18.261540 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:18.261761 kubelet[2840]: E0813 00:43:18.261571 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:18.897498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639133447.mount: Deactivated successfully. 
Aug 13 00:43:19.944175 kubelet[2840]: E0813 00:43:19.944018 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:20.372027 containerd[1580]: time="2025-08-13T00:43:20.371957952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:20.373646 containerd[1580]: time="2025-08-13T00:43:20.373404767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 00:43:20.375110 containerd[1580]: time="2025-08-13T00:43:20.375064861Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:20.378956 containerd[1580]: time="2025-08-13T00:43:20.378877863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:20.379960 containerd[1580]: time="2025-08-13T00:43:20.379653855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.487486987s" Aug 13 00:43:20.379960 containerd[1580]: time="2025-08-13T00:43:20.379715918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:43:20.382557 containerd[1580]: time="2025-08-13T00:43:20.382062766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:43:20.410135 containerd[1580]: time="2025-08-13T00:43:20.410007209Z" level=info msg="CreateContainer within sandbox \"a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:43:20.432459 containerd[1580]: time="2025-08-13T00:43:20.431907025Z" level=info msg="Container 93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:20.444829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876020140.mount: Deactivated successfully. Aug 13 00:43:20.464712 containerd[1580]: time="2025-08-13T00:43:20.464410220Z" level=info msg="CreateContainer within sandbox \"a197c32eb19402fa18434737abc7a21f19d0c9b04280791c593ac20c3fd1a543\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a\"" Aug 13 00:43:20.467770 containerd[1580]: time="2025-08-13T00:43:20.467727290Z" level=info msg="StartContainer for \"93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a\"" Aug 13 00:43:20.469671 containerd[1580]: time="2025-08-13T00:43:20.469629178Z" level=info msg="connecting to shim 93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a" address="unix:///run/containerd/s/9650bb609e8963cbb1fa769a6c91af53907f89d7de6db72bae6a45fec4b280f8" protocol=ttrpc version=3 Aug 13 00:43:20.528933 systemd[1]: Started cri-containerd-93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a.scope - libcontainer container 93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a. 
Aug 13 00:43:20.667560 containerd[1580]: time="2025-08-13T00:43:20.667146042Z" level=info msg="StartContainer for \"93c200a1cceb4ef93a7f59f11f1d24c4bc4b9b550a2e6882aa104a5438c2ea4a\" returns successfully" Aug 13 00:43:21.199910 kubelet[2840]: E0813 00:43:21.199861 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.199910 kubelet[2840]: W0813 00:43:21.199900 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.199930 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.200260 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.201072 kubelet[2840]: W0813 00:43:21.200279 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.200337 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.200649 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.201072 kubelet[2840]: W0813 00:43:21.200664 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.200681 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.201072 kubelet[2840]: E0813 00:43:21.201076 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.201795 kubelet[2840]: W0813 00:43:21.201089 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201795 kubelet[2840]: E0813 00:43:21.201106 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.201795 kubelet[2840]: E0813 00:43:21.201431 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.201795 kubelet[2840]: W0813 00:43:21.201445 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201795 kubelet[2840]: E0813 00:43:21.201461 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.201795 kubelet[2840]: E0813 00:43:21.201761 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.201795 kubelet[2840]: W0813 00:43:21.201774 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.201795 kubelet[2840]: E0813 00:43:21.201789 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.202321 kubelet[2840]: E0813 00:43:21.202067 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.202321 kubelet[2840]: W0813 00:43:21.202080 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.202321 kubelet[2840]: E0813 00:43:21.202094 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.202488 kubelet[2840]: E0813 00:43:21.202375 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.202488 kubelet[2840]: W0813 00:43:21.202387 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.202488 kubelet[2840]: E0813 00:43:21.202401 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.202721 kubelet[2840]: E0813 00:43:21.202707 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.202778 kubelet[2840]: W0813 00:43:21.202723 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.202778 kubelet[2840]: E0813 00:43:21.202738 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.203050 kubelet[2840]: E0813 00:43:21.203022 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.203050 kubelet[2840]: W0813 00:43:21.203045 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.203188 kubelet[2840]: E0813 00:43:21.203063 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.203377 kubelet[2840]: E0813 00:43:21.203352 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.203377 kubelet[2840]: W0813 00:43:21.203371 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.203522 kubelet[2840]: E0813 00:43:21.203387 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.203735 kubelet[2840]: E0813 00:43:21.203713 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.203735 kubelet[2840]: W0813 00:43:21.203732 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.203869 kubelet[2840]: E0813 00:43:21.203748 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.204169 kubelet[2840]: E0813 00:43:21.204020 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.204169 kubelet[2840]: W0813 00:43:21.204037 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.204169 kubelet[2840]: E0813 00:43:21.204052 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.205090 kubelet[2840]: E0813 00:43:21.204844 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.205090 kubelet[2840]: W0813 00:43:21.204863 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.205090 kubelet[2840]: E0813 00:43:21.204880 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.205439 kubelet[2840]: E0813 00:43:21.205401 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.205439 kubelet[2840]: W0813 00:43:21.205423 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.205577 kubelet[2840]: E0813 00:43:21.205440 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.225751 kubelet[2840]: E0813 00:43:21.225677 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.225751 kubelet[2840]: W0813 00:43:21.225742 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.226015 kubelet[2840]: E0813 00:43:21.225771 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.226166 kubelet[2840]: E0813 00:43:21.226146 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.226343 kubelet[2840]: W0813 00:43:21.226165 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.226343 kubelet[2840]: E0813 00:43:21.226186 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.226579 kubelet[2840]: E0813 00:43:21.226541 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.226579 kubelet[2840]: W0813 00:43:21.226562 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.226728 kubelet[2840]: E0813 00:43:21.226583 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.226953 kubelet[2840]: E0813 00:43:21.226931 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.226953 kubelet[2840]: W0813 00:43:21.226950 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.227103 kubelet[2840]: E0813 00:43:21.226968 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.227283 kubelet[2840]: E0813 00:43:21.227248 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.227283 kubelet[2840]: W0813 00:43:21.227270 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.227419 kubelet[2840]: E0813 00:43:21.227287 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.227611 kubelet[2840]: E0813 00:43:21.227590 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.227611 kubelet[2840]: W0813 00:43:21.227608 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.227812 kubelet[2840]: E0813 00:43:21.227624 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.228129 kubelet[2840]: E0813 00:43:21.227966 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.228129 kubelet[2840]: W0813 00:43:21.227985 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.228129 kubelet[2840]: E0813 00:43:21.228002 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.228557 kubelet[2840]: E0813 00:43:21.228535 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.228557 kubelet[2840]: W0813 00:43:21.228554 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.228747 kubelet[2840]: E0813 00:43:21.228572 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.228938 kubelet[2840]: E0813 00:43:21.228902 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.228938 kubelet[2840]: W0813 00:43:21.228921 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.228938 kubelet[2840]: E0813 00:43:21.228939 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.229235 kubelet[2840]: E0813 00:43:21.229210 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.229235 kubelet[2840]: W0813 00:43:21.229229 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.229370 kubelet[2840]: E0813 00:43:21.229245 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.229577 kubelet[2840]: E0813 00:43:21.229556 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.229577 kubelet[2840]: W0813 00:43:21.229574 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.229733 kubelet[2840]: E0813 00:43:21.229591 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.230049 kubelet[2840]: E0813 00:43:21.230030 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.230049 kubelet[2840]: W0813 00:43:21.230047 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.230190 kubelet[2840]: E0813 00:43:21.230063 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.230414 kubelet[2840]: E0813 00:43:21.230390 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.230485 kubelet[2840]: W0813 00:43:21.230414 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.230485 kubelet[2840]: E0813 00:43:21.230431 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.230813 kubelet[2840]: E0813 00:43:21.230789 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.230813 kubelet[2840]: W0813 00:43:21.230810 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.230957 kubelet[2840]: E0813 00:43:21.230826 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.231171 kubelet[2840]: E0813 00:43:21.231150 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.231171 kubelet[2840]: W0813 00:43:21.231168 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.231298 kubelet[2840]: E0813 00:43:21.231184 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.231474 kubelet[2840]: E0813 00:43:21.231456 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.231474 kubelet[2840]: W0813 00:43:21.231472 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.231594 kubelet[2840]: E0813 00:43:21.231487 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.232059 kubelet[2840]: E0813 00:43:21.232029 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.232059 kubelet[2840]: W0813 00:43:21.232054 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.232229 kubelet[2840]: E0813 00:43:21.232072 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:43:21.232706 kubelet[2840]: E0813 00:43:21.232662 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:43:21.232706 kubelet[2840]: W0813 00:43:21.232683 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:43:21.232882 kubelet[2840]: E0813 00:43:21.232728 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:43:21.861567 containerd[1580]: time="2025-08-13T00:43:21.861497949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:21.863654 containerd[1580]: time="2025-08-13T00:43:21.863131749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:43:21.865063 containerd[1580]: time="2025-08-13T00:43:21.865009797Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:21.868050 containerd[1580]: time="2025-08-13T00:43:21.867999895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:21.869230 containerd[1580]: time="2025-08-13T00:43:21.869180341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.487061998s" Aug 13 00:43:21.869383 containerd[1580]: time="2025-08-13T00:43:21.869355760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:43:21.875923 containerd[1580]: time="2025-08-13T00:43:21.875861761Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:43:21.891024 containerd[1580]: time="2025-08-13T00:43:21.890964958Z" level=info msg="Container 7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:21.905489 containerd[1580]: time="2025-08-13T00:43:21.905418613Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\"" Aug 13 00:43:21.906919 containerd[1580]: time="2025-08-13T00:43:21.906877493Z" level=info msg="StartContainer for \"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\"" Aug 13 00:43:21.909343 containerd[1580]: time="2025-08-13T00:43:21.909300509Z" level=info msg="connecting to shim 7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0" address="unix:///run/containerd/s/dc2bef38a22cf8b597c8cd30d28b32c33ebfd79ca261b30da7a3e2da5d2568c0" protocol=ttrpc version=3 Aug 13 00:43:21.944657 kubelet[2840]: E0813 00:43:21.944588 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:21.945520 systemd[1]: Started cri-containerd-7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0.scope - libcontainer container 7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0. Aug 13 00:43:22.019115 containerd[1580]: time="2025-08-13T00:43:22.018749138Z" level=info msg="StartContainer for \"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\" returns successfully" Aug 13 00:43:22.037304 systemd[1]: cri-containerd-7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0.scope: Deactivated successfully. Aug 13 00:43:22.042991 containerd[1580]: time="2025-08-13T00:43:22.042889571Z" level=info msg="received exit event container_id:\"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\" id:\"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\" pid:3528 exited_at:{seconds:1755045802 nanos:41536273}" Aug 13 00:43:22.042991 containerd[1580]: time="2025-08-13T00:43:22.042941246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\" id:\"7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0\" pid:3528 exited_at:{seconds:1755045802 nanos:41536273}" Aug 13 00:43:22.088536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7caf6bf5396669797a995ad8f828731e48837bf2758e0f365977c765683e46c0-rootfs.mount: Deactivated successfully. 
Aug 13 00:43:22.126848 kubelet[2840]: I0813 00:43:22.125951 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:22.151283 kubelet[2840]: I0813 00:43:22.151166 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784bfbb7cc-rfnp2" podStartSLOduration=2.6607144910000002 podStartE2EDuration="5.151140309s" podCreationTimestamp="2025-08-13 00:43:17 +0000 UTC" firstStartedPulling="2025-08-13 00:43:17.890792263 +0000 UTC m=+20.158237122" lastFinishedPulling="2025-08-13 00:43:20.381218077 +0000 UTC m=+22.648662940" observedRunningTime="2025-08-13 00:43:21.134487955 +0000 UTC m=+23.401932844" watchObservedRunningTime="2025-08-13 00:43:22.151140309 +0000 UTC m=+24.418585180" Aug 13 00:43:23.942397 kubelet[2840]: E0813 00:43:23.942280 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:24.139513 containerd[1580]: time="2025-08-13T00:43:24.139464667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:43:25.942727 kubelet[2840]: E0813 00:43:25.942015 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:27.857285 containerd[1580]: time="2025-08-13T00:43:27.857209603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:27.858426 containerd[1580]: time="2025-08-13T00:43:27.858386124Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:43:27.860067 containerd[1580]: time="2025-08-13T00:43:27.860004758Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:27.863325 containerd[1580]: time="2025-08-13T00:43:27.863252284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:27.864343 containerd[1580]: time="2025-08-13T00:43:27.864249073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.724730102s" Aug 13 00:43:27.864343 containerd[1580]: time="2025-08-13T00:43:27.864305296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:43:27.870415 containerd[1580]: time="2025-08-13T00:43:27.870361213Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:43:27.881726 containerd[1580]: time="2025-08-13T00:43:27.880775296Z" level=info msg="Container 5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:27.895683 containerd[1580]: time="2025-08-13T00:43:27.895609875Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\"" Aug 13 00:43:27.896422 containerd[1580]: time="2025-08-13T00:43:27.896391061Z" level=info msg="StartContainer for \"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\"" Aug 13 00:43:27.898855 containerd[1580]: time="2025-08-13T00:43:27.898731122Z" level=info msg="connecting to shim 5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2" address="unix:///run/containerd/s/dc2bef38a22cf8b597c8cd30d28b32c33ebfd79ca261b30da7a3e2da5d2568c0" protocol=ttrpc version=3 Aug 13 00:43:27.938002 systemd[1]: Started cri-containerd-5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2.scope - libcontainer container 5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2. Aug 13 00:43:27.945049 kubelet[2840]: E0813 00:43:27.944985 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:28.021622 containerd[1580]: time="2025-08-13T00:43:28.021572286Z" level=info msg="StartContainer for \"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\" returns successfully" Aug 13 00:43:29.144802 containerd[1580]: time="2025-08-13T00:43:29.144722995Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:43:29.147506 systemd[1]: cri-containerd-5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2.scope: Deactivated successfully. 
Aug 13 00:43:29.148671 systemd[1]: cri-containerd-5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2.scope: Consumed 772ms CPU time, 192.6M memory peak, 171.2M written to disk. Aug 13 00:43:29.153507 containerd[1580]: time="2025-08-13T00:43:29.153462619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\" id:\"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\" pid:3587 exited_at:{seconds:1755045809 nanos:152967747}" Aug 13 00:43:29.153792 containerd[1580]: time="2025-08-13T00:43:29.153762166Z" level=info msg="received exit event container_id:\"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\" id:\"5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2\" pid:3587 exited_at:{seconds:1755045809 nanos:152967747}" Aug 13 00:43:29.191029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ccaf29c491f64fdf9017a55cc74e0b64ae106d6de6d6e7db43b2fe48cf9a3e2-rootfs.mount: Deactivated successfully. Aug 13 00:43:29.236713 kubelet[2840]: I0813 00:43:29.236654 2840 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:43:29.461138 systemd[1]: Created slice kubepods-besteffort-pod1785cf06_2712_4bdb_974e_6b234b5e14b3.slice - libcontainer container kubepods-besteffort-pod1785cf06_2712_4bdb_974e_6b234b5e14b3.slice. 
Aug 13 00:43:29.487923 kubelet[2840]: I0813 00:43:29.487827 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-backend-key-pair\") pod \"whisker-65f978496c-9vxfm\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " pod="calico-system/whisker-65f978496c-9vxfm" Aug 13 00:43:29.487923 kubelet[2840]: I0813 00:43:29.487912 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-ca-bundle\") pod \"whisker-65f978496c-9vxfm\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " pod="calico-system/whisker-65f978496c-9vxfm" Aug 13 00:43:29.487923 kubelet[2840]: I0813 00:43:29.487949 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7h5h\" (UniqueName: \"kubernetes.io/projected/1785cf06-2712-4bdb-974e-6b234b5e14b3-kube-api-access-v7h5h\") pod \"whisker-65f978496c-9vxfm\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " pod="calico-system/whisker-65f978496c-9vxfm" Aug 13 00:43:29.690260 kubelet[2840]: I0813 00:43:29.690201 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwhg\" (UniqueName: \"kubernetes.io/projected/7427794f-df0b-4eef-ab46-3e54c32f9a51-kube-api-access-cvwhg\") pod \"calico-apiserver-7d6f6c8bbc-5tds7\" (UID: \"7427794f-df0b-4eef-ab46-3e54c32f9a51\") " pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" Aug 13 00:43:29.690557 kubelet[2840]: I0813 00:43:29.690374 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7427794f-df0b-4eef-ab46-3e54c32f9a51-calico-apiserver-certs\") pod 
\"calico-apiserver-7d6f6c8bbc-5tds7\" (UID: \"7427794f-df0b-4eef-ab46-3e54c32f9a51\") " pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" Aug 13 00:43:29.772103 systemd[1]: Created slice kubepods-besteffort-pod7427794f_df0b_4eef_ab46_3e54c32f9a51.slice - libcontainer container kubepods-besteffort-pod7427794f_df0b_4eef_ab46_3e54c32f9a51.slice. Aug 13 00:43:29.778259 containerd[1580]: time="2025-08-13T00:43:29.778207997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f978496c-9vxfm,Uid:1785cf06-2712-4bdb-974e-6b234b5e14b3,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:29.794055 kubelet[2840]: I0813 00:43:29.794006 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7255bc01-9448-4cab-923b-0cf690e45c02-calico-apiserver-certs\") pod \"calico-apiserver-7d6f6c8bbc-bxg4r\" (UID: \"7255bc01-9448-4cab-923b-0cf690e45c02\") " pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" Aug 13 00:43:29.794463 kubelet[2840]: I0813 00:43:29.794431 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmpd\" (UniqueName: \"kubernetes.io/projected/7255bc01-9448-4cab-923b-0cf690e45c02-kube-api-access-ggmpd\") pod \"calico-apiserver-7d6f6c8bbc-bxg4r\" (UID: \"7255bc01-9448-4cab-923b-0cf690e45c02\") " pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" Aug 13 00:43:29.794649 kubelet[2840]: I0813 00:43:29.794626 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea81321d-5f1d-4834-988c-9415b7d9f826-tigera-ca-bundle\") pod \"calico-kube-controllers-79bd578d96-ndvjd\" (UID: \"ea81321d-5f1d-4834-988c-9415b7d9f826\") " pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" Aug 13 00:43:29.794822 kubelet[2840]: I0813 00:43:29.794801 2840 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbh8\" (UniqueName: \"kubernetes.io/projected/d5677ecd-db25-4716-b13e-6e82b6dd2619-kube-api-access-xpbh8\") pod \"coredns-674b8bbfcf-nsvj6\" (UID: \"d5677ecd-db25-4716-b13e-6e82b6dd2619\") " pod="kube-system/coredns-674b8bbfcf-nsvj6" Aug 13 00:43:29.794955 kubelet[2840]: I0813 00:43:29.794935 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbjd\" (UniqueName: \"kubernetes.io/projected/ea81321d-5f1d-4834-988c-9415b7d9f826-kube-api-access-lgbjd\") pod \"calico-kube-controllers-79bd578d96-ndvjd\" (UID: \"ea81321d-5f1d-4834-988c-9415b7d9f826\") " pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" Aug 13 00:43:29.795064 kubelet[2840]: I0813 00:43:29.795047 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5677ecd-db25-4716-b13e-6e82b6dd2619-config-volume\") pod \"coredns-674b8bbfcf-nsvj6\" (UID: \"d5677ecd-db25-4716-b13e-6e82b6dd2619\") " pod="kube-system/coredns-674b8bbfcf-nsvj6" Aug 13 00:43:29.868185 systemd[1]: Created slice kubepods-besteffort-pod7255bc01_9448_4cab_923b_0cf690e45c02.slice - libcontainer container kubepods-besteffort-pod7255bc01_9448_4cab_923b_0cf690e45c02.slice. Aug 13 00:43:29.887758 systemd[1]: Created slice kubepods-besteffort-podea81321d_5f1d_4834_988c_9415b7d9f826.slice - libcontainer container kubepods-besteffort-podea81321d_5f1d_4834_988c_9415b7d9f826.slice. 
Aug 13 00:43:29.897897 kubelet[2840]: I0813 00:43:29.895540 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxhq\" (UniqueName: \"kubernetes.io/projected/77bef1ca-58c6-4825-816c-72d9c90d0fe4-kube-api-access-qzxhq\") pod \"coredns-674b8bbfcf-xchqv\" (UID: \"77bef1ca-58c6-4825-816c-72d9c90d0fe4\") " pod="kube-system/coredns-674b8bbfcf-xchqv" Aug 13 00:43:29.897897 kubelet[2840]: I0813 00:43:29.895647 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77bef1ca-58c6-4825-816c-72d9c90d0fe4-config-volume\") pod \"coredns-674b8bbfcf-xchqv\" (UID: \"77bef1ca-58c6-4825-816c-72d9c90d0fe4\") " pod="kube-system/coredns-674b8bbfcf-xchqv" Aug 13 00:43:29.897897 kubelet[2840]: I0813 00:43:29.895769 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2-config\") pod \"goldmane-768f4c5c69-vklsd\" (UID: \"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2\") " pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:29.897897 kubelet[2840]: I0813 00:43:29.895810 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9d9\" (UniqueName: \"kubernetes.io/projected/b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2-kube-api-access-xl9d9\") pod \"goldmane-768f4c5c69-vklsd\" (UID: \"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2\") " pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:29.897897 kubelet[2840]: I0813 00:43:29.895955 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-vklsd\" (UID: \"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2\") " 
pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:29.898277 kubelet[2840]: I0813 00:43:29.896146 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2-goldmane-key-pair\") pod \"goldmane-768f4c5c69-vklsd\" (UID: \"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2\") " pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:29.923220 systemd[1]: Created slice kubepods-burstable-podd5677ecd_db25_4716_b13e_6e82b6dd2619.slice - libcontainer container kubepods-burstable-podd5677ecd_db25_4716_b13e_6e82b6dd2619.slice. Aug 13 00:43:29.956991 systemd[1]: Created slice kubepods-besteffort-podb84d4379_b5a4_4f26_b0c2_e2f82ba95fd2.slice - libcontainer container kubepods-besteffort-podb84d4379_b5a4_4f26_b0c2_e2f82ba95fd2.slice. Aug 13 00:43:29.985208 systemd[1]: Created slice kubepods-burstable-pod77bef1ca_58c6_4825_816c_72d9c90d0fe4.slice - libcontainer container kubepods-burstable-pod77bef1ca_58c6_4825_816c_72d9c90d0fe4.slice. Aug 13 00:43:30.003946 systemd[1]: Created slice kubepods-besteffort-pod40a52536_acd7_4e83_951d_9cdacd5200e1.slice - libcontainer container kubepods-besteffort-pod40a52536_acd7_4e83_951d_9cdacd5200e1.slice. 
Aug 13 00:43:30.024430 containerd[1580]: time="2025-08-13T00:43:30.024299158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnp88,Uid:40a52536-acd7-4e83-951d-9cdacd5200e1,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:30.031993 containerd[1580]: time="2025-08-13T00:43:30.031789031Z" level=error msg="Failed to destroy network for sandbox \"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.040394 containerd[1580]: time="2025-08-13T00:43:30.039913390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f978496c-9vxfm,Uid:1785cf06-2712-4bdb-974e-6b234b5e14b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.041118 kubelet[2840]: E0813 00:43:30.041049 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.043271 kubelet[2840]: E0813 00:43:30.043215 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f978496c-9vxfm" Aug 13 00:43:30.043392 kubelet[2840]: E0813 00:43:30.043274 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65f978496c-9vxfm" Aug 13 00:43:30.043464 kubelet[2840]: E0813 00:43:30.043376 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65f978496c-9vxfm_calico-system(1785cf06-2712-4bdb-974e-6b234b5e14b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65f978496c-9vxfm_calico-system(1785cf06-2712-4bdb-974e-6b234b5e14b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e4c4ff1617d484acc45f457903dec49561d76ec6166bf84fe18fca3739189e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65f978496c-9vxfm" podUID="1785cf06-2712-4bdb-974e-6b234b5e14b3" Aug 13 00:43:30.091344 containerd[1580]: time="2025-08-13T00:43:30.091294980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-5tds7,Uid:7427794f-df0b-4eef-ab46-3e54c32f9a51,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:43:30.143929 containerd[1580]: time="2025-08-13T00:43:30.143812655Z" level=error msg="Failed to destroy network for sandbox \"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.145721 containerd[1580]: time="2025-08-13T00:43:30.145642251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnp88,Uid:40a52536-acd7-4e83-951d-9cdacd5200e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.146995 kubelet[2840]: E0813 00:43:30.146091 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.146995 kubelet[2840]: E0813 00:43:30.146202 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:30.146995 kubelet[2840]: E0813 00:43:30.146239 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hnp88" Aug 13 00:43:30.147188 kubelet[2840]: E0813 00:43:30.146338 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hnp88_calico-system(40a52536-acd7-4e83-951d-9cdacd5200e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hnp88_calico-system(40a52536-acd7-4e83-951d-9cdacd5200e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1a7f566abc687aee0fc7746927feb668f89c174cfed0af8427dc25ff0a79202\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hnp88" podUID="40a52536-acd7-4e83-951d-9cdacd5200e1" Aug 13 00:43:30.167241 containerd[1580]: time="2025-08-13T00:43:30.167187603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:43:30.183712 containerd[1580]: time="2025-08-13T00:43:30.183450169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-bxg4r,Uid:7255bc01-9448-4cab-923b-0cf690e45c02,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:43:30.215429 containerd[1580]: time="2025-08-13T00:43:30.215374698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bd578d96-ndvjd,Uid:ea81321d-5f1d-4834-988c-9415b7d9f826,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:30.242757 containerd[1580]: time="2025-08-13T00:43:30.237529566Z" level=error msg="Failed to destroy network for sandbox \"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.242757 containerd[1580]: 
time="2025-08-13T00:43:30.241602178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsvj6,Uid:d5677ecd-db25-4716-b13e-6e82b6dd2619,Namespace:kube-system,Attempt:0,}" Aug 13 00:43:30.240805 systemd[1]: run-netns-cni\x2d70a17e64\x2df55d\x2dc281\x2dbae8\x2dac2e71bdc99e.mount: Deactivated successfully. Aug 13 00:43:30.255519 systemd[1]: run-netns-cni\x2dea38247c\x2d459e\x2de63a\x2d4f5d\x2d23109fcb5973.mount: Deactivated successfully. Aug 13 00:43:30.273313 containerd[1580]: time="2025-08-13T00:43:30.273258068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vklsd,Uid:b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:30.276650 containerd[1580]: time="2025-08-13T00:43:30.276365210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-5tds7,Uid:7427794f-df0b-4eef-ab46-3e54c32f9a51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.279526 kubelet[2840]: E0813 00:43:30.278910 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.281940 kubelet[2840]: E0813 00:43:30.280299 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" Aug 13 00:43:30.281940 kubelet[2840]: E0813 00:43:30.280385 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" Aug 13 00:43:30.284466 kubelet[2840]: E0813 00:43:30.280500 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6f6c8bbc-5tds7_calico-apiserver(7427794f-df0b-4eef-ab46-3e54c32f9a51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6f6c8bbc-5tds7_calico-apiserver(7427794f-df0b-4eef-ab46-3e54c32f9a51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60d12d404caf4ceed3282eb32bab4719fb532c7b43d5e29088289aa1016faeff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" podUID="7427794f-df0b-4eef-ab46-3e54c32f9a51" Aug 13 00:43:30.306815 containerd[1580]: time="2025-08-13T00:43:30.306635566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xchqv,Uid:77bef1ca-58c6-4825-816c-72d9c90d0fe4,Namespace:kube-system,Attempt:0,}" Aug 13 00:43:30.471178 containerd[1580]: time="2025-08-13T00:43:30.471105817Z" level=error msg="Failed to 
destroy network for sandbox \"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.474013 containerd[1580]: time="2025-08-13T00:43:30.473884839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-bxg4r,Uid:7255bc01-9448-4cab-923b-0cf690e45c02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.474336 kubelet[2840]: E0813 00:43:30.474237 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.474336 kubelet[2840]: E0813 00:43:30.474307 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" Aug 13 00:43:30.475765 kubelet[2840]: E0813 00:43:30.474337 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" Aug 13 00:43:30.475765 kubelet[2840]: E0813 00:43:30.474418 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6f6c8bbc-bxg4r_calico-apiserver(7255bc01-9448-4cab-923b-0cf690e45c02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6f6c8bbc-bxg4r_calico-apiserver(7255bc01-9448-4cab-923b-0cf690e45c02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10b829183c7b03f2cb457d41c4333656217d9eeeadc0dfa733ea3cb8ca3ac060\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" podUID="7255bc01-9448-4cab-923b-0cf690e45c02" Aug 13 00:43:30.552546 containerd[1580]: time="2025-08-13T00:43:30.552080230Z" level=error msg="Failed to destroy network for sandbox \"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.559052 containerd[1580]: time="2025-08-13T00:43:30.558749998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bd578d96-ndvjd,Uid:ea81321d-5f1d-4834-988c-9415b7d9f826,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.559922 kubelet[2840]: E0813 00:43:30.559873 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.560446 kubelet[2840]: E0813 00:43:30.560125 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" Aug 13 00:43:30.560446 kubelet[2840]: E0813 00:43:30.560283 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" Aug 13 00:43:30.560446 kubelet[2840]: E0813 00:43:30.560367 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bd578d96-ndvjd_calico-system(ea81321d-5f1d-4834-988c-9415b7d9f826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-79bd578d96-ndvjd_calico-system(ea81321d-5f1d-4834-988c-9415b7d9f826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"375430838237bed8273f059fa81859658c20a5056cbdc8bb791b40eddbe6ffca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" podUID="ea81321d-5f1d-4834-988c-9415b7d9f826" Aug 13 00:43:30.569996 containerd[1580]: time="2025-08-13T00:43:30.569927176Z" level=error msg="Failed to destroy network for sandbox \"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.572438 containerd[1580]: time="2025-08-13T00:43:30.572318897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsvj6,Uid:d5677ecd-db25-4716-b13e-6e82b6dd2619,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.573500 kubelet[2840]: E0813 00:43:30.572791 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.574904 kubelet[2840]: E0813 
00:43:30.573751 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nsvj6" Aug 13 00:43:30.574904 kubelet[2840]: E0813 00:43:30.573794 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nsvj6" Aug 13 00:43:30.574904 kubelet[2840]: E0813 00:43:30.573916 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nsvj6_kube-system(d5677ecd-db25-4716-b13e-6e82b6dd2619)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nsvj6_kube-system(d5677ecd-db25-4716-b13e-6e82b6dd2619)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"135d860bae7c9f0b75b7519e3b6eab1ed81ab07038b2ba7ae4b36c8c61e9fdc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nsvj6" podUID="d5677ecd-db25-4716-b13e-6e82b6dd2619" Aug 13 00:43:30.576428 containerd[1580]: time="2025-08-13T00:43:30.576273924Z" level=error msg="Failed to destroy network for sandbox \"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.578797 containerd[1580]: time="2025-08-13T00:43:30.578725133Z" level=error msg="Failed to destroy network for sandbox \"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.579286 containerd[1580]: time="2025-08-13T00:43:30.579208741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xchqv,Uid:77bef1ca-58c6-4825-816c-72d9c90d0fe4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.579896 kubelet[2840]: E0813 00:43:30.579649 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.579896 kubelet[2840]: E0813 00:43:30.579774 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-xchqv" Aug 13 00:43:30.579896 kubelet[2840]: E0813 00:43:30.579807 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xchqv" Aug 13 00:43:30.580499 kubelet[2840]: E0813 00:43:30.579899 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xchqv_kube-system(77bef1ca-58c6-4825-816c-72d9c90d0fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xchqv_kube-system(77bef1ca-58c6-4825-816c-72d9c90d0fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62cd6d201239fdf23ae8027695bd0dfe7a97460fa7627d6b0a563cfad957c7ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xchqv" podUID="77bef1ca-58c6-4825-816c-72d9c90d0fe4" Aug 13 00:43:30.581411 containerd[1580]: time="2025-08-13T00:43:30.581275386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vklsd,Uid:b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.582659 kubelet[2840]: E0813 00:43:30.582605 2840 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:43:30.582659 kubelet[2840]: E0813 00:43:30.582701 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:30.582659 kubelet[2840]: E0813 00:43:30.582735 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vklsd" Aug 13 00:43:30.584202 kubelet[2840]: E0813 00:43:30.582837 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-vklsd_calico-system(b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-vklsd_calico-system(b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e6123e6f1f92f74ed72f971c672f3467929523a2b1294afb11d7d8eed1d40be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vklsd" podUID="b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2" Aug 13 00:43:31.198822 systemd[1]: run-netns-cni\x2daf4bbc7a\x2d3423\x2d416d\x2d2569\x2df4eb9ccae315.mount: Deactivated successfully. Aug 13 00:43:31.199444 systemd[1]: run-netns-cni\x2dfbd67319\x2df531\x2df4f1\x2de1bf\x2de015d78d67f5.mount: Deactivated successfully. Aug 13 00:43:31.199947 systemd[1]: run-netns-cni\x2ddc5880ca\x2dd5d0\x2dd983\x2d6525\x2d794ed0a9e9fc.mount: Deactivated successfully. Aug 13 00:43:31.200275 systemd[1]: run-netns-cni\x2d0eccde84\x2d4141\x2d4819\x2d5daf\x2da565241b3655.mount: Deactivated successfully. Aug 13 00:43:31.200505 systemd[1]: run-netns-cni\x2d976f32d4\x2dbd87\x2d4b0f\x2d7780\x2dd6ed0ea280ae.mount: Deactivated successfully. Aug 13 00:43:37.537255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600972169.mount: Deactivated successfully. Aug 13 00:43:37.580945 containerd[1580]: time="2025-08-13T00:43:37.580873558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:37.582419 containerd[1580]: time="2025-08-13T00:43:37.582146559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:43:37.583826 containerd[1580]: time="2025-08-13T00:43:37.583762877Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:37.587213 containerd[1580]: time="2025-08-13T00:43:37.587154158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:37.588173 containerd[1580]: time="2025-08-13T00:43:37.588132437Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.420856601s" Aug 13 00:43:37.588379 containerd[1580]: time="2025-08-13T00:43:37.588353067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:43:37.616727 containerd[1580]: time="2025-08-13T00:43:37.616576597Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:43:37.631716 containerd[1580]: time="2025-08-13T00:43:37.631640480Z" level=info msg="Container 8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:37.647579 containerd[1580]: time="2025-08-13T00:43:37.647510309Z" level=info msg="CreateContainer within sandbox \"944c0b66a9df22ef9c365eaf7bd3de9c23178b4dfa959fac9c9d45101d1d9f34\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\"" Aug 13 00:43:37.648472 containerd[1580]: time="2025-08-13T00:43:37.648400291Z" level=info msg="StartContainer for \"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\"" Aug 13 00:43:37.650749 containerd[1580]: time="2025-08-13T00:43:37.650707905Z" level=info msg="connecting to shim 8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979" address="unix:///run/containerd/s/dc2bef38a22cf8b597c8cd30d28b32c33ebfd79ca261b30da7a3e2da5d2568c0" protocol=ttrpc version=3 Aug 13 00:43:37.683462 systemd[1]: Started 
cri-containerd-8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979.scope - libcontainer container 8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979. Aug 13 00:43:37.773819 containerd[1580]: time="2025-08-13T00:43:37.773764962Z" level=info msg="StartContainer for \"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" returns successfully" Aug 13 00:43:37.916098 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:43:37.916280 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 00:43:38.168822 kubelet[2840]: I0813 00:43:38.168088 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7h5h\" (UniqueName: \"kubernetes.io/projected/1785cf06-2712-4bdb-974e-6b234b5e14b3-kube-api-access-v7h5h\") pod \"1785cf06-2712-4bdb-974e-6b234b5e14b3\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " Aug 13 00:43:38.171199 kubelet[2840]: I0813 00:43:38.170360 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-backend-key-pair\") pod \"1785cf06-2712-4bdb-974e-6b234b5e14b3\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " Aug 13 00:43:38.171199 kubelet[2840]: I0813 00:43:38.170425 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-ca-bundle\") pod \"1785cf06-2712-4bdb-974e-6b234b5e14b3\" (UID: \"1785cf06-2712-4bdb-974e-6b234b5e14b3\") " Aug 13 00:43:38.173631 kubelet[2840]: I0813 00:43:38.173568 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1785cf06-2712-4bdb-974e-6b234b5e14b3" (UID: 
"1785cf06-2712-4bdb-974e-6b234b5e14b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:43:38.179063 kubelet[2840]: I0813 00:43:38.179005 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1785cf06-2712-4bdb-974e-6b234b5e14b3-kube-api-access-v7h5h" (OuterVolumeSpecName: "kube-api-access-v7h5h") pod "1785cf06-2712-4bdb-974e-6b234b5e14b3" (UID: "1785cf06-2712-4bdb-974e-6b234b5e14b3"). InnerVolumeSpecName "kube-api-access-v7h5h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:43:38.180168 kubelet[2840]: I0813 00:43:38.180096 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1785cf06-2712-4bdb-974e-6b234b5e14b3" (UID: "1785cf06-2712-4bdb-974e-6b234b5e14b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:43:38.213339 systemd[1]: Removed slice kubepods-besteffort-pod1785cf06_2712_4bdb_974e_6b234b5e14b3.slice - libcontainer container kubepods-besteffort-pod1785cf06_2712_4bdb_974e_6b234b5e14b3.slice. 
Aug 13 00:43:38.255576 kubelet[2840]: I0813 00:43:38.255490 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sb6kh" podStartSLOduration=1.934518497 podStartE2EDuration="21.255466112s" podCreationTimestamp="2025-08-13 00:43:17 +0000 UTC" firstStartedPulling="2025-08-13 00:43:18.268589438 +0000 UTC m=+20.536034285" lastFinishedPulling="2025-08-13 00:43:37.58953705 +0000 UTC m=+39.856981900" observedRunningTime="2025-08-13 00:43:38.23396019 +0000 UTC m=+40.501405048" watchObservedRunningTime="2025-08-13 00:43:38.255466112 +0000 UTC m=+40.522910982" Aug 13 00:43:38.271511 kubelet[2840]: I0813 00:43:38.271457 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-backend-key-pair\") on node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:43:38.271511 kubelet[2840]: I0813 00:43:38.271508 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1785cf06-2712-4bdb-974e-6b234b5e14b3-whisker-ca-bundle\") on node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:43:38.271511 kubelet[2840]: I0813 00:43:38.271526 2840 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7h5h\" (UniqueName: \"kubernetes.io/projected/1785cf06-2712-4bdb-974e-6b234b5e14b3-kube-api-access-v7h5h\") on node \"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal\" DevicePath \"\"" Aug 13 00:43:38.358433 systemd[1]: Created slice kubepods-besteffort-pod5fdbf962_07b6_454c_9fe2_555279763358.slice - libcontainer container kubepods-besteffort-pod5fdbf962_07b6_454c_9fe2_555279763358.slice. 
Aug 13 00:43:38.371839 kubelet[2840]: I0813 00:43:38.371787 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5fdbf962-07b6-454c-9fe2-555279763358-whisker-backend-key-pair\") pod \"whisker-757f9d6b7c-hhljw\" (UID: \"5fdbf962-07b6-454c-9fe2-555279763358\") " pod="calico-system/whisker-757f9d6b7c-hhljw" Aug 13 00:43:38.372142 kubelet[2840]: I0813 00:43:38.372118 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fdbf962-07b6-454c-9fe2-555279763358-whisker-ca-bundle\") pod \"whisker-757f9d6b7c-hhljw\" (UID: \"5fdbf962-07b6-454c-9fe2-555279763358\") " pod="calico-system/whisker-757f9d6b7c-hhljw" Aug 13 00:43:38.372340 kubelet[2840]: I0813 00:43:38.372317 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ddt\" (UniqueName: \"kubernetes.io/projected/5fdbf962-07b6-454c-9fe2-555279763358-kube-api-access-c2ddt\") pod \"whisker-757f9d6b7c-hhljw\" (UID: \"5fdbf962-07b6-454c-9fe2-555279763358\") " pod="calico-system/whisker-757f9d6b7c-hhljw" Aug 13 00:43:38.538338 systemd[1]: var-lib-kubelet-pods-1785cf06\x2d2712\x2d4bdb\x2d974e\x2d6b234b5e14b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7h5h.mount: Deactivated successfully. Aug 13 00:43:38.538530 systemd[1]: var-lib-kubelet-pods-1785cf06\x2d2712\x2d4bdb\x2d974e\x2d6b234b5e14b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 00:43:38.672406 containerd[1580]: time="2025-08-13T00:43:38.672344607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f9d6b7c-hhljw,Uid:5fdbf962-07b6-454c-9fe2-555279763358,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:38.833428 systemd-networkd[1477]: cali09acf21f181: Link UP Aug 13 00:43:38.834225 systemd-networkd[1477]: cali09acf21f181: Gained carrier Aug 13 00:43:38.859490 containerd[1580]: 2025-08-13 00:43:38.713 [INFO][3926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:38.859490 containerd[1580]: 2025-08-13 00:43:38.729 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0 whisker-757f9d6b7c- calico-system 5fdbf962-07b6-454c-9fe2-555279763358 878 0 2025-08-13 00:43:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:757f9d6b7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal whisker-757f9d6b7c-hhljw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09acf21f181 [] [] }} ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-" Aug 13 00:43:38.859490 containerd[1580]: 2025-08-13 00:43:38.729 [INFO][3926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.859490 
containerd[1580]: 2025-08-13 00:43:38.766 [INFO][3938] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" HandleID="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.767 [INFO][3938] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" HandleID="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"whisker-757f9d6b7c-hhljw", "timestamp":"2025-08-13 00:43:38.766873865 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.767 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.767 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.767 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.779 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.787 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.793 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860137 containerd[1580]: 2025-08-13 00:43:38.796 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.799 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.799 [INFO][3938] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.801 [INFO][3938] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65 Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.806 [INFO][3938] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.62.64/26 handle="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.816 [INFO][3938] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.65/26] block=192.168.62.64/26 handle="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.817 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.65/26] handle="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.817 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:38.860571 containerd[1580]: 2025-08-13 00:43:38.817 [INFO][3938] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.65/26] IPv6=[] ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" HandleID="k8s-pod-network.47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.860982 containerd[1580]: 2025-08-13 00:43:38.821 [INFO][3926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0", GenerateName:"whisker-757f9d6b7c-", Namespace:"calico-system", SelfLink:"", UID:"5fdbf962-07b6-454c-9fe2-555279763358", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f9d6b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-757f9d6b7c-hhljw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09acf21f181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:38.861111 containerd[1580]: 2025-08-13 00:43:38.821 [INFO][3926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.65/32] ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.861111 containerd[1580]: 2025-08-13 00:43:38.821 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09acf21f181 
ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.861111 containerd[1580]: 2025-08-13 00:43:38.834 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.861250 containerd[1580]: 2025-08-13 00:43:38.834 [INFO][3926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0", GenerateName:"whisker-757f9d6b7c-", Namespace:"calico-system", SelfLink:"", UID:"5fdbf962-07b6-454c-9fe2-555279763358", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"757f9d6b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65", Pod:"whisker-757f9d6b7c-hhljw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09acf21f181", MAC:"82:59:82:62:ae:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:38.861362 containerd[1580]: 2025-08-13 00:43:38.856 [INFO][3926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" Namespace="calico-system" Pod="whisker-757f9d6b7c-hhljw" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-whisker--757f9d6b7c--hhljw-eth0" Aug 13 00:43:38.895721 containerd[1580]: time="2025-08-13T00:43:38.895434187Z" level=info msg="connecting to shim 47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65" address="unix:///run/containerd/s/07ab798ff95f73234d7d662e31d8cc02ca22f70c0d11f7375e3bb392eb1c31fb" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:38.936468 systemd[1]: Started cri-containerd-47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65.scope - libcontainer container 47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65. 
Aug 13 00:43:39.011244 containerd[1580]: time="2025-08-13T00:43:39.011094054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757f9d6b7c-hhljw,Uid:5fdbf962-07b6-454c-9fe2-555279763358,Namespace:calico-system,Attempt:0,} returns sandbox id \"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65\"" Aug 13 00:43:39.014968 containerd[1580]: time="2025-08-13T00:43:39.014907123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:43:39.546783 kubelet[2840]: I0813 00:43:39.546045 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:39.949714 kubelet[2840]: I0813 00:43:39.949642 2840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1785cf06-2712-4bdb-974e-6b234b5e14b3" path="/var/lib/kubelet/pods/1785cf06-2712-4bdb-974e-6b234b5e14b3/volumes" Aug 13 00:43:40.093656 containerd[1580]: time="2025-08-13T00:43:40.093508964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:40.098527 containerd[1580]: time="2025-08-13T00:43:40.097660360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 00:43:40.100958 containerd[1580]: time="2025-08-13T00:43:40.100272231Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:40.109237 containerd[1580]: time="2025-08-13T00:43:40.109176794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:40.110555 containerd[1580]: time="2025-08-13T00:43:40.110512677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.095550659s" Aug 13 00:43:40.111653 containerd[1580]: time="2025-08-13T00:43:40.111618491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:43:40.121011 containerd[1580]: time="2025-08-13T00:43:40.120970247Z" level=info msg="CreateContainer within sandbox \"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:43:40.141905 containerd[1580]: time="2025-08-13T00:43:40.141824223Z" level=info msg="Container e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:40.149537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524175817.mount: Deactivated successfully. 
Aug 13 00:43:40.163246 containerd[1580]: time="2025-08-13T00:43:40.163182114Z" level=info msg="CreateContainer within sandbox \"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653\"" Aug 13 00:43:40.165238 containerd[1580]: time="2025-08-13T00:43:40.165201404Z" level=info msg="StartContainer for \"e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653\"" Aug 13 00:43:40.168832 containerd[1580]: time="2025-08-13T00:43:40.168792994Z" level=info msg="connecting to shim e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653" address="unix:///run/containerd/s/07ab798ff95f73234d7d662e31d8cc02ca22f70c0d11f7375e3bb392eb1c31fb" protocol=ttrpc version=3 Aug 13 00:43:40.235015 systemd[1]: Started cri-containerd-e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653.scope - libcontainer container e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653. 
Aug 13 00:43:40.352091 containerd[1580]: time="2025-08-13T00:43:40.352025523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" id:\"dbf8a7065fe0268041c7a42eaecb2c2486e5a631201cbbabd31854dcdc5406b9\" pid:4032 exit_status:1 exited_at:{seconds:1755045820 nanos:350875135}" Aug 13 00:43:40.430071 containerd[1580]: time="2025-08-13T00:43:40.429852182Z" level=info msg="StartContainer for \"e76c66d214841253501a941fbafda8433b506b39f3453276d3b80f3006195653\" returns successfully" Aug 13 00:43:40.436199 containerd[1580]: time="2025-08-13T00:43:40.436144262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:43:40.580540 containerd[1580]: time="2025-08-13T00:43:40.580383116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" id:\"a98e0327d56801f0300f882c39cc1e4f64892c85cd7acd630f8f711435103f2a\" pid:4155 exit_status:1 exited_at:{seconds:1755045820 nanos:579990090}" Aug 13 00:43:40.725191 systemd-networkd[1477]: cali09acf21f181: Gained IPv6LL Aug 13 00:43:41.944966 containerd[1580]: time="2025-08-13T00:43:41.944402091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xchqv,Uid:77bef1ca-58c6-4825-816c-72d9c90d0fe4,Namespace:kube-system,Attempt:0,}" Aug 13 00:43:41.947860 containerd[1580]: time="2025-08-13T00:43:41.947802901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-bxg4r,Uid:7255bc01-9448-4cab-923b-0cf690e45c02,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:43:41.948197 containerd[1580]: time="2025-08-13T00:43:41.948054761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bd578d96-ndvjd,Uid:ea81321d-5f1d-4834-988c-9415b7d9f826,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:42.339436 systemd-networkd[1477]: calicdd19a90d27: Link UP Aug 13 00:43:42.341934 
systemd-networkd[1477]: calicdd19a90d27: Gained carrier Aug 13 00:43:42.379795 containerd[1580]: 2025-08-13 00:43:42.072 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:42.379795 containerd[1580]: 2025-08-13 00:43:42.114 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0 coredns-674b8bbfcf- kube-system 77bef1ca-58c6-4825-816c-72d9c90d0fe4 822 0 2025-08-13 00:43:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal coredns-674b8bbfcf-xchqv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicdd19a90d27 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-" Aug 13 00:43:42.379795 containerd[1580]: 2025-08-13 00:43:42.114 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.379795 containerd[1580]: 2025-08-13 00:43:42.226 [INFO][4242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" HandleID="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" 
Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.233 [INFO][4242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" HandleID="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000343b90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-xchqv", "timestamp":"2025-08-13 00:43:42.226343863 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.233 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.234 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.234 [INFO][4242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.260 [INFO][4242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.273 [INFO][4242] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.285 [INFO][4242] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380223 containerd[1580]: 2025-08-13 00:43:42.290 [INFO][4242] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.294 [INFO][4242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.294 [INFO][4242] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.297 [INFO][4242] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.305 [INFO][4242] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.62.64/26 handle="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.321 [INFO][4242] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.66/26] block=192.168.62.64/26 handle="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.321 [INFO][4242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.66/26] handle="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.322 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:42.380651 containerd[1580]: 2025-08-13 00:43:42.323 [INFO][4242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.66/26] IPv6=[] ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" HandleID="k8s-pod-network.244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.330 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"77bef1ca-58c6-4825-816c-72d9c90d0fe4", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-xchqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdd19a90d27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.332 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.66/32] ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.332 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdd19a90d27 ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.344 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.346 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"77bef1ca-58c6-4825-816c-72d9c90d0fe4", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b", Pod:"coredns-674b8bbfcf-xchqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdd19a90d27", MAC:"1a:17:14:6e:8b:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.381048 containerd[1580]: 2025-08-13 00:43:42.375 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xchqv" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--xchqv-eth0" Aug 13 00:43:42.455718 containerd[1580]: time="2025-08-13T00:43:42.454963730Z" level=info msg="connecting to shim 244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b" address="unix:///run/containerd/s/8795852103eeda2316ba6a128ef32a88bf348ffc74aeb376953391fa029568df" 
namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:42.461524 kubelet[2840]: I0813 00:43:42.461474 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:42.514094 systemd-networkd[1477]: cali09c58d97dd3: Link UP Aug 13 00:43:42.521917 systemd-networkd[1477]: cali09c58d97dd3: Gained carrier Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.120 [INFO][4203] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.154 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0 calico-kube-controllers-79bd578d96- calico-system ea81321d-5f1d-4834-988c-9415b7d9f826 821 0 2025-08-13 00:43:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79bd578d96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal calico-kube-controllers-79bd578d96-ndvjd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali09c58d97dd3 [] [] }} ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.154 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" 
WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.267 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" HandleID="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.267 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" HandleID="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"calico-kube-controllers-79bd578d96-ndvjd", "timestamp":"2025-08-13 00:43:42.266350812 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.267 [INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.321 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.322 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.366 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.390 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.400 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.408 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.415 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.418 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.431 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.448 [INFO][4248] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.62.64/26 handle="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.467 [INFO][4248] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.67/26] block=192.168.62.64/26 handle="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.468 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.67/26] handle="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.469 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:42.615154 containerd[1580]: 2025-08-13 00:43:42.469 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.67/26] IPv6=[] ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" HandleID="k8s-pod-network.9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 00:43:42.489 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0", GenerateName:"calico-kube-controllers-79bd578d96-", Namespace:"calico-system", SelfLink:"", UID:"ea81321d-5f1d-4834-988c-9415b7d9f826", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bd578d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-79bd578d96-ndvjd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09c58d97dd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 00:43:42.492 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.67/32] ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 
00:43:42.494 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09c58d97dd3 ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 00:43:42.551 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 00:43:42.565 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0", GenerateName:"calico-kube-controllers-79bd578d96-", Namespace:"calico-system", SelfLink:"", UID:"ea81321d-5f1d-4834-988c-9415b7d9f826", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"79bd578d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed", Pod:"calico-kube-controllers-79bd578d96-ndvjd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09c58d97dd3", MAC:"3e:40:69:90:8f:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.618402 containerd[1580]: 2025-08-13 00:43:42.599 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" Namespace="calico-system" Pod="calico-kube-controllers-79bd578d96-ndvjd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--kube--controllers--79bd578d96--ndvjd-eth0" Aug 13 00:43:42.665974 systemd[1]: Started cri-containerd-244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b.scope - libcontainer container 244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b. 
Aug 13 00:43:42.713757 systemd-networkd[1477]: calicf2d39eacf9: Link UP Aug 13 00:43:42.715503 systemd-networkd[1477]: calicf2d39eacf9: Gained carrier Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.142 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.184 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0 calico-apiserver-7d6f6c8bbc- calico-apiserver 7255bc01-9448-4cab-923b-0cf690e45c02 820 0 2025-08-13 00:43:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6f6c8bbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal calico-apiserver-7d6f6c8bbc-bxg4r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf2d39eacf9 [] [] }} ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.184 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.277 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" HandleID="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.278 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" HandleID="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003323c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"calico-apiserver-7d6f6c8bbc-bxg4r", "timestamp":"2025-08-13 00:43:42.277668976 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.278 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.469 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.470 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.562 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.585 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.602 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.608 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.615 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.618 [INFO][4253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.625 [INFO][4253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.639 [INFO][4253] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.62.64/26 handle="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.662 [INFO][4253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.68/26] block=192.168.62.64/26 handle="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.663 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.68/26] handle="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.664 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:43:42.762986 containerd[1580]: 2025-08-13 00:43:42.665 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.68/26] IPv6=[] ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" HandleID="k8s-pod-network.aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.686 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0", GenerateName:"calico-apiserver-7d6f6c8bbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7255bc01-9448-4cab-923b-0cf690e45c02", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6f6c8bbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7d6f6c8bbc-bxg4r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2d39eacf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.698 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.68/32] ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.698 [INFO][4215] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf2d39eacf9 ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.716 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.718 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0", GenerateName:"calico-apiserver-7d6f6c8bbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7255bc01-9448-4cab-923b-0cf690e45c02", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6f6c8bbc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c", Pod:"calico-apiserver-7d6f6c8bbc-bxg4r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf2d39eacf9", MAC:"86:9b:67:67:43:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:42.767193 containerd[1580]: 2025-08-13 00:43:42.753 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-bxg4r" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--bxg4r-eth0" Aug 13 00:43:42.806322 containerd[1580]: time="2025-08-13T00:43:42.804289448Z" level=info msg="connecting to shim 9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed" address="unix:///run/containerd/s/90c0c43cbbd2d211448cb7714f05afd20b8552a9fbc01f7d283bdedb3b71a3d3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:42.845244 containerd[1580]: time="2025-08-13T00:43:42.845115987Z" level=info msg="connecting to shim aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c" address="unix:///run/containerd/s/48b3b3275d08a59291071abedaaf0614aec9b67ef7303145ca3f252b22dd8052" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:42.945216 
containerd[1580]: time="2025-08-13T00:43:42.945047891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vklsd,Uid:b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:42.946732 containerd[1580]: time="2025-08-13T00:43:42.946423548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-5tds7,Uid:7427794f-df0b-4eef-ab46-3e54c32f9a51,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:43:42.994058 systemd[1]: Started cri-containerd-aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c.scope - libcontainer container aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c. Aug 13 00:43:43.011633 containerd[1580]: time="2025-08-13T00:43:43.011555329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xchqv,Uid:77bef1ca-58c6-4825-816c-72d9c90d0fe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b\"" Aug 13 00:43:43.028131 containerd[1580]: time="2025-08-13T00:43:43.027971693Z" level=info msg="CreateContainer within sandbox \"244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:43:43.099980 systemd[1]: Started cri-containerd-9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed.scope - libcontainer container 9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed. 
Aug 13 00:43:43.142491 containerd[1580]: time="2025-08-13T00:43:43.142413946Z" level=info msg="Container cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:43.191095 containerd[1580]: time="2025-08-13T00:43:43.190858999Z" level=info msg="CreateContainer within sandbox \"244b42fd880b68b932e1b0dae5e96fc3b97d64953bde9f91e283dad9b66c2e7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796\"" Aug 13 00:43:43.193876 containerd[1580]: time="2025-08-13T00:43:43.192681373Z" level=info msg="StartContainer for \"cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796\"" Aug 13 00:43:43.204348 containerd[1580]: time="2025-08-13T00:43:43.203766137Z" level=info msg="connecting to shim cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796" address="unix:///run/containerd/s/8795852103eeda2316ba6a128ef32a88bf348ffc74aeb376953391fa029568df" protocol=ttrpc version=3 Aug 13 00:43:43.321205 systemd[1]: Started cri-containerd-cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796.scope - libcontainer container cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796. 
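The systemd entries above show containerd wrapping each container in a transient scope unit named after its full container ID ("cri-containerd-&lt;id&gt;.scope"). A tiny helper for correlating systemd unit names with containerd IDs when reading such logs (the function names are hypothetical, for illustration only):

```python
# Hypothetical log-correlation helpers: containerd names each container's
# transient systemd unit "cri-containerd-<full container id>.scope", as seen
# in the "Started cri-containerd-....scope" entries above.
PREFIX = "cri-containerd-"
SUFFIX = ".scope"

def scope_unit(container_id: str) -> str:
    """Map a containerd container ID to its systemd scope unit name."""
    return f"{PREFIX}{container_id}{SUFFIX}"

def container_id_from_unit(unit: str) -> str:
    """Recover the container ID from a cri-containerd scope unit name."""
    if not (unit.startswith(PREFIX) and unit.endswith(SUFFIX)):
        raise ValueError(f"not a cri-containerd scope unit: {unit}")
    return unit[len(PREFIX):-len(SUFFIX)]
```

Mapping in either direction lets the `systemd[1]` lines be matched to the `containerd[1580]` lines that reference the same container.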
Aug 13 00:43:43.476826 containerd[1580]: time="2025-08-13T00:43:43.476596052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-bxg4r,Uid:7255bc01-9448-4cab-923b-0cf690e45c02,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c\"" Aug 13 00:43:43.557494 containerd[1580]: time="2025-08-13T00:43:43.556371917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bd578d96-ndvjd,Uid:ea81321d-5f1d-4834-988c-9415b7d9f826,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed\"" Aug 13 00:43:43.621516 containerd[1580]: time="2025-08-13T00:43:43.621421839Z" level=info msg="StartContainer for \"cded8b6775502c659ba65811c06a94f10ef6e32cc68bc5070b39a4b4b7fc8796\" returns successfully" Aug 13 00:43:43.714757 systemd-networkd[1477]: cali8182fdb10af: Link UP Aug 13 00:43:43.717098 systemd-networkd[1477]: cali8182fdb10af: Gained carrier Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.150 [INFO][4397] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.194 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0 goldmane-768f4c5c69- calico-system b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2 823 0 2025-08-13 00:43:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal goldmane-768f4c5c69-vklsd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8182fdb10af [] [] }} 
ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.194 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.446 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" HandleID="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.447 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" HandleID="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"goldmane-768f4c5c69-vklsd", "timestamp":"2025-08-13 00:43:43.4462155 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.448 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.448 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.448 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.505 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.537 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.578 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.601 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.629 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.634 [INFO][4452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" 
host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.638 [INFO][4452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.653 [INFO][4452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.678 [INFO][4452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.69/26] block=192.168.62.64/26 handle="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.678 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.69/26] handle="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.680 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
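The IPAM sequence repeated in these entries (acquire the host-wide lock, load the block the host has affinity for — 192.168.62.64/26 here — claim the next free address, write the block back, release the lock) can be sketched as a toy model. This is an illustrative simplification, not Calico's actual implementation; all names below are invented:

```python
import ipaddress
import threading

# Toy model of the logged IPAM cycle: take the host-wide lock, claim the
# next free address from the host-affine block, release on exit.
class Block:
    def __init__(self, cidr: str):
        self.cidr = ipaddress.ip_network(cidr)
        self.allocations = {}             # ip -> handle that claimed it

    def claim_next(self, handle: str):
        # .hosts() on a /26 skips the network and broadcast addresses,
        # so the first claimable IP in 192.168.62.64/26 is .65.
        for ip in self.cidr.hosts():
            if ip not in self.allocations:
                self.allocations[ip] = handle
                return ip
        return None                       # block exhausted

HOST_IPAM_LOCK = threading.Lock()         # "host-wide IPAM lock" in the log

def auto_assign(block: Block, handle: str):
    with HOST_IPAM_LOCK:                  # "Acquired host-wide IPAM lock."
        return block.claim_next(handle)   # "Writing block in order to claim IPs"
```

In the log, three consecutive cycles against the same /26 block hand out .68, .69, and .70, matching this next-free-address behavior.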
Aug 13 00:43:43.795914 containerd[1580]: 2025-08-13 00:43:43.681 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.69/26] IPv6=[] ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" HandleID="k8s-pod-network.e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.695 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-768f4c5c69-vklsd", Endpoint:"eth0", 
ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8182fdb10af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.697 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.69/32] ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.698 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8182fdb10af ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.721 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.725 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc", Pod:"goldmane-768f4c5c69-vklsd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8182fdb10af", MAC:"36:9f:86:d8:96:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:43.798818 containerd[1580]: 2025-08-13 00:43:43.789 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" Namespace="calico-system" Pod="goldmane-768f4c5c69-vklsd" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--vklsd-eth0" Aug 13 00:43:43.798879 
systemd-networkd[1477]: calicdd19a90d27: Gained IPv6LL Aug 13 00:43:43.909381 containerd[1580]: time="2025-08-13T00:43:43.909320439Z" level=info msg="connecting to shim e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc" address="unix:///run/containerd/s/15575405deac0ae8c3756b410274f818a6fc5da354b3582fbdbe1142f37ead36" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:43.967763 containerd[1580]: time="2025-08-13T00:43:43.966295198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnp88,Uid:40a52536-acd7-4e83-951d-9cdacd5200e1,Namespace:calico-system,Attempt:0,}" Aug 13 00:43:43.972266 containerd[1580]: time="2025-08-13T00:43:43.968432565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsvj6,Uid:d5677ecd-db25-4716-b13e-6e82b6dd2619,Namespace:kube-system,Attempt:0,}" Aug 13 00:43:43.989506 systemd-networkd[1477]: cali09c58d97dd3: Gained IPv6LL Aug 13 00:43:44.042893 systemd-networkd[1477]: cali158ba873e69: Link UP Aug 13 00:43:44.045597 systemd-networkd[1477]: cali158ba873e69: Gained carrier Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.342 [INFO][4414] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.427 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0 calico-apiserver-7d6f6c8bbc- calico-apiserver 7427794f-df0b-4eef-ab46-3e54c32f9a51 818 0 2025-08-13 00:43:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6f6c8bbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal calico-apiserver-7d6f6c8bbc-5tds7 
eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali158ba873e69 [] [] }} ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.428 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.575 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" HandleID="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.576 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" HandleID="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000321640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"calico-apiserver-7d6f6c8bbc-5tds7", "timestamp":"2025-08-13 00:43:43.575647686 +0000 UTC"}, 
Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.576 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.679 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.679 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.721 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.744 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.795 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.837 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.865 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.866 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.62.64/26 handle="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.872 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88 Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.894 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.950 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.70/26] block=192.168.62.64/26 handle="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.950 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.70/26] handle="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.950 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
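Across the three IPAM cycles above, the assigned addresses (192.168.62.68, .69, .70) all surface in "Successfully claimed IPs" entries, which makes them easy to pull out of raw log text. The regex helper below is hypothetical, keyed to that message format:

```python
import re

# Hypothetical log filter: extract the CIDRs that Calico IPAM log lines
# report as "Successfully claimed IPs: [a.b.c.d/nn]".
CLAIMED = re.compile(r"Successfully claimed IPs: \[([0-9.]+/\d+)\]")

def claimed_ips(log_text: str):
    """Return every claimed CIDR mentioned in the given log text."""
    return CLAIMED.findall(log_text)
```

Running it over the entries in this section would yield one /26-scoped address per pod sandbox set up on the node.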
Aug 13 00:43:44.119047 containerd[1580]: 2025-08-13 00:43:43.950 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.70/26] IPv6=[] ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" HandleID="k8s-pod-network.cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.122662 containerd[1580]: 2025-08-13 00:43:43.969 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0", GenerateName:"calico-apiserver-7d6f6c8bbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7427794f-df0b-4eef-ab46-3e54c32f9a51", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6f6c8bbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7d6f6c8bbc-5tds7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali158ba873e69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.122662 containerd[1580]: 2025-08-13 00:43:43.969 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.70/32] ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.122662 containerd[1580]: 2025-08-13 00:43:43.969 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali158ba873e69 ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.122662 containerd[1580]: 2025-08-13 00:43:44.048 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.122662 containerd[1580]: 2025-08-13 00:43:44.053 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0", GenerateName:"calico-apiserver-7d6f6c8bbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7427794f-df0b-4eef-ab46-3e54c32f9a51", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6f6c8bbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88", Pod:"calico-apiserver-7d6f6c8bbc-5tds7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali158ba873e69", MAC:"5a:c6:ae:84:ab:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.122662 containerd[1580]: 
2025-08-13 00:43:44.088 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" Namespace="calico-apiserver" Pod="calico-apiserver-7d6f6c8bbc-5tds7" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-calico--apiserver--7d6f6c8bbc--5tds7-eth0" Aug 13 00:43:44.153425 systemd[1]: Started cri-containerd-e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc.scope - libcontainer container e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc. Aug 13 00:43:44.307819 containerd[1580]: time="2025-08-13T00:43:44.307308874Z" level=info msg="connecting to shim cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88" address="unix:///run/containerd/s/ed9680eba10deab305f535aaa46d8f17c26d40e7fb38377d234fa6d45445e274" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:44.419751 kubelet[2840]: I0813 00:43:44.419506 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xchqv" podStartSLOduration=43.419480435 podStartE2EDuration="43.419480435s" podCreationTimestamp="2025-08-13 00:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:43:44.418053077 +0000 UTC m=+46.685497948" watchObservedRunningTime="2025-08-13 00:43:44.419480435 +0000 UTC m=+46.686925305" Aug 13 00:43:44.513121 systemd[1]: Started cri-containerd-cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88.scope - libcontainer container cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88. Aug 13 00:43:44.539537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592061119.mount: Deactivated successfully. 
Aug 13 00:43:44.579299 containerd[1580]: time="2025-08-13T00:43:44.578788095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:44.583388 containerd[1580]: time="2025-08-13T00:43:44.583320791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 00:43:44.584815 containerd[1580]: time="2025-08-13T00:43:44.584702065Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:44.592226 containerd[1580]: time="2025-08-13T00:43:44.592166922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:44.611611 containerd[1580]: time="2025-08-13T00:43:44.611544434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.175343861s" Aug 13 00:43:44.611611 containerd[1580]: time="2025-08-13T00:43:44.611612267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:43:44.615638 containerd[1580]: time="2025-08-13T00:43:44.614786134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:43:44.623772 containerd[1580]: time="2025-08-13T00:43:44.623252270Z" level=info msg="CreateContainer within sandbox 
\"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:43:44.629200 systemd-networkd[1477]: calicf2d39eacf9: Gained IPv6LL Aug 13 00:43:44.650080 containerd[1580]: time="2025-08-13T00:43:44.649998420Z" level=info msg="Container d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:44.666640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384491151.mount: Deactivated successfully. Aug 13 00:43:44.688068 containerd[1580]: time="2025-08-13T00:43:44.687976176Z" level=info msg="CreateContainer within sandbox \"47d93e4452a44432774b0f8566b4454cf21804b0ab98eca2ad5012c60df8bf65\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e\"" Aug 13 00:43:44.691037 containerd[1580]: time="2025-08-13T00:43:44.689876966Z" level=info msg="StartContainer for \"d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e\"" Aug 13 00:43:44.695716 containerd[1580]: time="2025-08-13T00:43:44.695275888Z" level=info msg="connecting to shim d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e" address="unix:///run/containerd/s/07ab798ff95f73234d7d662e31d8cc02ca22f70c0d11f7375e3bb392eb1c31fb" protocol=ttrpc version=3 Aug 13 00:43:44.723884 systemd-networkd[1477]: caliaf49b6dfd45: Link UP Aug 13 00:43:44.728148 systemd-networkd[1477]: caliaf49b6dfd45: Gained carrier Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.385 [INFO][4560] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.450 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0 coredns-674b8bbfcf- kube-system 
d5677ecd-db25-4716-b13e-6e82b6dd2619 819 0 2025-08-13 00:43:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal coredns-674b8bbfcf-nsvj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf49b6dfd45 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.450 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.545 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" HandleID="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.547 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" HandleID="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0000ffbe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-nsvj6", "timestamp":"2025-08-13 00:43:44.545649347 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.547 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.547 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.547 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.573 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.585 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.611 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.620 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.628 [INFO][4626] ipam/ipam.go 235: 
Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.629 [INFO][4626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.632 [INFO][4626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7 Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.645 [INFO][4626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.680 [INFO][4626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.71/26] block=192.168.62.64/26 handle="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.681 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.71/26] handle="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.683 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:43:44.787426 containerd[1580]: 2025-08-13 00:43:44.684 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.71/26] IPv6=[] ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" HandleID="k8s-pod-network.7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.707 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d5677ecd-db25-4716-b13e-6e82b6dd2619", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-nsvj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf49b6dfd45", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.710 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.71/32] ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.710 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf49b6dfd45 ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.723 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.734 [INFO][4560] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d5677ecd-db25-4716-b13e-6e82b6dd2619", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7", Pod:"coredns-674b8bbfcf-nsvj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf49b6dfd45", MAC:"3e:a9:ea:dd:44:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.790928 containerd[1580]: 2025-08-13 00:43:44.781 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsvj6" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--nsvj6-eth0" Aug 13 00:43:44.815024 systemd[1]: Started cri-containerd-d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e.scope - libcontainer container d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e. Aug 13 00:43:44.888102 systemd-networkd[1477]: cali1e3129d247f: Link UP Aug 13 00:43:44.891126 systemd-networkd[1477]: cali1e3129d247f: Gained carrier Aug 13 00:43:44.913417 containerd[1580]: time="2025-08-13T00:43:44.913330625Z" level=info msg="connecting to shim 7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7" address="unix:///run/containerd/s/455296be1270ab2fefefcef81fb74bbd6888ef6b9b450fb37ac09e2700284dac" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.297 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.375 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0 csi-node-driver- calico-system 40a52536-acd7-4e83-951d-9cdacd5200e1 708 0 2025-08-13 00:43:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal csi-node-driver-hnp88 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e3129d247f [] [] }} ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.375 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.558 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" HandleID="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.558 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" HandleID="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001e6a30), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", "pod":"csi-node-driver-hnp88", "timestamp":"2025-08-13 00:43:44.558035515 +0000 UTC"}, Hostname:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.558 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.681 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.681 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal' Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.736 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.751 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.771 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.795 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.807 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 
host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.807 [INFO][4613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.816 [INFO][4613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3 Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.833 [INFO][4613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.863 [INFO][4613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.62.72/26] block=192.168.62.64/26 handle="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.864 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.72/26] handle="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" host="ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal" Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.864 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:43:44.934246 containerd[1580]: 2025-08-13 00:43:44.864 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.72/26] IPv6=[] ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" HandleID="k8s-pod-network.00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Workload="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.873 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40a52536-acd7-4e83-951d-9cdacd5200e1", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-hnp88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e3129d247f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.874 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.72/32] ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.875 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e3129d247f ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.895 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.897 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" Namespace="calico-system" Pod="csi-node-driver-hnp88" 
WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40a52536-acd7-4e83-951d-9cdacd5200e1", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-1-0-fd62537495e6f103d88c.c.flatcar-212911.internal", ContainerID:"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3", Pod:"csi-node-driver-hnp88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e3129d247f", MAC:"da:ce:92:91:e2:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:43:44.937237 containerd[1580]: 2025-08-13 00:43:44.924 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" 
Namespace="calico-system" Pod="csi-node-driver-hnp88" WorkloadEndpoint="ci--4372--1--0--fd62537495e6f103d88c.c.flatcar--212911.internal-k8s-csi--node--driver--hnp88-eth0" Aug 13 00:43:44.975119 containerd[1580]: time="2025-08-13T00:43:44.974983147Z" level=info msg="connecting to shim 00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3" address="unix:///run/containerd/s/4c4e2b4e1a1cc75052a0d712a4b9f438976c63de985f07e965f67e7761a1ee1d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:43:45.003454 systemd[1]: Started cri-containerd-7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7.scope - libcontainer container 7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7. Aug 13 00:43:45.063011 systemd[1]: Started cri-containerd-00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3.scope - libcontainer container 00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3. Aug 13 00:43:45.136443 containerd[1580]: time="2025-08-13T00:43:45.136381031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsvj6,Uid:d5677ecd-db25-4716-b13e-6e82b6dd2619,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7\"" Aug 13 00:43:45.157064 containerd[1580]: time="2025-08-13T00:43:45.156842535Z" level=info msg="CreateContainer within sandbox \"7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:43:45.177859 containerd[1580]: time="2025-08-13T00:43:45.176375316Z" level=info msg="Container 01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:45.186371 containerd[1580]: time="2025-08-13T00:43:45.186310201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnp88,Uid:40a52536-acd7-4e83-951d-9cdacd5200e1,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3\"" Aug 13 00:43:45.223919 containerd[1580]: time="2025-08-13T00:43:45.223720710Z" level=info msg="CreateContainer within sandbox \"7b0efa6bf08739912a5593b64457e72f6a816a1ee2e3902b9085afe42ab0f9c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3\"" Aug 13 00:43:45.233596 containerd[1580]: time="2025-08-13T00:43:45.233516894Z" level=info msg="StartContainer for \"01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3\"" Aug 13 00:43:45.240125 containerd[1580]: time="2025-08-13T00:43:45.240049879Z" level=info msg="connecting to shim 01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3" address="unix:///run/containerd/s/455296be1270ab2fefefcef81fb74bbd6888ef6b9b450fb37ac09e2700284dac" protocol=ttrpc version=3 Aug 13 00:43:45.292294 containerd[1580]: time="2025-08-13T00:43:45.290932096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vklsd,Uid:b84d4379-b5a4-4f26-b0c2-e2f82ba95fd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc\"" Aug 13 00:43:45.318852 containerd[1580]: time="2025-08-13T00:43:45.318773033Z" level=info msg="StartContainer for \"d8f09c9bb231359128a3d65704c64af0ce51e900da001cd4a1a90bf3e1650e5e\" returns successfully" Aug 13 00:43:45.333502 systemd-networkd[1477]: cali158ba873e69: Gained IPv6LL Aug 13 00:43:45.339017 systemd[1]: Started cri-containerd-01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3.scope - libcontainer container 01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3. 
Aug 13 00:43:45.432782 kubelet[2840]: I0813 00:43:45.431619 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-757f9d6b7c-hhljw" podStartSLOduration=1.8304378460000001 podStartE2EDuration="7.431292411s" podCreationTimestamp="2025-08-13 00:43:38 +0000 UTC" firstStartedPulling="2025-08-13 00:43:39.013217205 +0000 UTC m=+41.280662065" lastFinishedPulling="2025-08-13 00:43:44.614071787 +0000 UTC m=+46.881516630" observedRunningTime="2025-08-13 00:43:45.424302373 +0000 UTC m=+47.691747244" watchObservedRunningTime="2025-08-13 00:43:45.431292411 +0000 UTC m=+47.698737281" Aug 13 00:43:45.457322 containerd[1580]: time="2025-08-13T00:43:45.457253141Z" level=info msg="StartContainer for \"01f763cd9c33e9af81396059496c61439e1bf67dfd7fd1abb1e1ba7f998e72b3\" returns successfully" Aug 13 00:43:45.589866 systemd-networkd[1477]: cali8182fdb10af: Gained IPv6LL Aug 13 00:43:45.733214 containerd[1580]: time="2025-08-13T00:43:45.732637891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6f6c8bbc-5tds7,Uid:7427794f-df0b-4eef-ab46-3e54c32f9a51,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88\"" Aug 13 00:43:46.421131 systemd-networkd[1477]: caliaf49b6dfd45: Gained IPv6LL Aug 13 00:43:46.471009 kubelet[2840]: I0813 00:43:46.470792 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nsvj6" podStartSLOduration=45.470767781 podStartE2EDuration="45.470767781s" podCreationTimestamp="2025-08-13 00:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:43:46.4690798 +0000 UTC m=+48.736524685" watchObservedRunningTime="2025-08-13 00:43:46.470767781 +0000 UTC m=+48.738212655" Aug 13 00:43:46.614419 systemd-networkd[1477]: cali1e3129d247f: Gained IPv6LL Aug 13 00:43:47.109393 
systemd-networkd[1477]: vxlan.calico: Link UP Aug 13 00:43:47.109407 systemd-networkd[1477]: vxlan.calico: Gained carrier Aug 13 00:43:48.433660 containerd[1580]: time="2025-08-13T00:43:48.433588136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:48.435116 containerd[1580]: time="2025-08-13T00:43:48.435045972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 00:43:48.436640 containerd[1580]: time="2025-08-13T00:43:48.436552007Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:48.439488 containerd[1580]: time="2025-08-13T00:43:48.439415836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:48.440711 containerd[1580]: time="2025-08-13T00:43:48.440532519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.825522859s" Aug 13 00:43:48.440711 containerd[1580]: time="2025-08-13T00:43:48.440579878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:43:48.442798 containerd[1580]: time="2025-08-13T00:43:48.442755354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:43:48.447145 containerd[1580]: 
time="2025-08-13T00:43:48.447010623Z" level=info msg="CreateContainer within sandbox \"aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:43:48.460726 containerd[1580]: time="2025-08-13T00:43:48.458881404Z" level=info msg="Container 608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:48.474717 containerd[1580]: time="2025-08-13T00:43:48.474653309Z" level=info msg="CreateContainer within sandbox \"aadbf1882c1c626c74a322688cfeb71cbe1b45be2e118139d6fe125303ce9a2c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0\"" Aug 13 00:43:48.475817 containerd[1580]: time="2025-08-13T00:43:48.475468975Z" level=info msg="StartContainer for \"608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0\"" Aug 13 00:43:48.477402 containerd[1580]: time="2025-08-13T00:43:48.477341748Z" level=info msg="connecting to shim 608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0" address="unix:///run/containerd/s/48b3b3275d08a59291071abedaaf0614aec9b67ef7303145ca3f252b22dd8052" protocol=ttrpc version=3 Aug 13 00:43:48.515903 systemd[1]: Started cri-containerd-608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0.scope - libcontainer container 608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0. 
Aug 13 00:43:48.593652 containerd[1580]: time="2025-08-13T00:43:48.593596649Z" level=info msg="StartContainer for \"608e16b0170cdf9f465cca4650a7d2df7dcd489103c4e84aa67ee3b156d56cc0\" returns successfully" Aug 13 00:43:49.046069 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Aug 13 00:43:49.513667 kubelet[2840]: I0813 00:43:49.513570 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-bxg4r" podStartSLOduration=32.558266685 podStartE2EDuration="37.513546627s" podCreationTimestamp="2025-08-13 00:43:12 +0000 UTC" firstStartedPulling="2025-08-13 00:43:43.486613941 +0000 UTC m=+45.754058808" lastFinishedPulling="2025-08-13 00:43:48.441893906 +0000 UTC m=+50.709338750" observedRunningTime="2025-08-13 00:43:49.5131816 +0000 UTC m=+51.780626470" watchObservedRunningTime="2025-08-13 00:43:49.513546627 +0000 UTC m=+51.780991497" Aug 13 00:43:50.485345 kubelet[2840]: I0813 00:43:50.484839 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:43:51.841724 containerd[1580]: time="2025-08-13T00:43:51.841629497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:43:51.937240 ntpd[1543]: Listen normally on 7 vxlan.calico 192.168.62.64:123
Aug 13 00:43:51.937375 ntpd[1543]: Listen normally on 8 cali09acf21f181 [fe80::ecee:eeff:feee:eeee%4]:123
Aug 13 00:43:51.937461 ntpd[1543]: Listen normally on 9 calicdd19a90d27 [fe80::ecee:eeff:feee:eeee%5]:123
Aug 13 00:43:51.937523 ntpd[1543]: Listen normally on 10 cali09c58d97dd3 [fe80::ecee:eeff:feee:eeee%6]:123
Aug 13 00:43:51.937583 ntpd[1543]: Listen normally on 11 calicf2d39eacf9 [fe80::ecee:eeff:feee:eeee%7]:123
Aug 13 00:43:51.937663 ntpd[1543]: Listen normally on 12 cali8182fdb10af [fe80::ecee:eeff:feee:eeee%8]:123
Aug 13 00:43:51.937799 ntpd[1543]: Listen normally on 13 cali158ba873e69 [fe80::ecee:eeff:feee:eeee%9]:123
Aug 13 00:43:51.937861 ntpd[1543]: Listen normally on 14 caliaf49b6dfd45 [fe80::ecee:eeff:feee:eeee%10]:123
Aug 13 00:43:51.937911 ntpd[1543]: Listen normally on 15 cali1e3129d247f [fe80::ecee:eeff:feee:eeee%11]:123
Aug 13 00:43:51.937973 ntpd[1543]: Listen normally on 16 vxlan.calico [fe80::6497:4bff:fe60:bdf2%12]:123
Aug 13 00:43:51.939051 containerd[1580]: time="2025-08-13T00:43:51.938785432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 00:43:51.940484 containerd[1580]: time="2025-08-13T00:43:51.940355300Z" level=info msg="ImageCreate event 
name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:51.945575 containerd[1580]: time="2025-08-13T00:43:51.945190416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:51.945575 containerd[1580]: time="2025-08-13T00:43:51.945415380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.502604958s" Aug 13 00:43:51.945575 containerd[1580]: time="2025-08-13T00:43:51.945457234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:43:51.948151 containerd[1580]: time="2025-08-13T00:43:51.948067587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:43:51.974421 containerd[1580]: time="2025-08-13T00:43:51.974353572Z" level=info msg="CreateContainer within sandbox \"9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:43:51.986648 containerd[1580]: time="2025-08-13T00:43:51.986598839Z" level=info msg="Container 5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:52.002179 containerd[1580]: time="2025-08-13T00:43:52.002006769Z" level=info msg="CreateContainer within sandbox \"9b2dce50c913b889eb6ba1ce5401d128f24560e2ffd169b8812bc398014fafed\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\"" Aug 13 00:43:52.003087 containerd[1580]: time="2025-08-13T00:43:52.003029049Z" level=info msg="StartContainer for \"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\"" Aug 13 00:43:52.006256 containerd[1580]: time="2025-08-13T00:43:52.006204333Z" level=info msg="connecting to shim 5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178" address="unix:///run/containerd/s/90c0c43cbbd2d211448cb7714f05afd20b8552a9fbc01f7d283bdedb3b71a3d3" protocol=ttrpc version=3 Aug 13 00:43:52.047911 systemd[1]: Started cri-containerd-5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178.scope - libcontainer container 5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178. Aug 13 00:43:52.135735 containerd[1580]: time="2025-08-13T00:43:52.135583266Z" level=info msg="StartContainer for \"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\" returns successfully" Aug 13 00:43:52.518450 kubelet[2840]: I0813 00:43:52.518187 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79bd578d96-ndvjd" podStartSLOduration=26.14849059 podStartE2EDuration="34.518164357s" podCreationTimestamp="2025-08-13 00:43:18 +0000 UTC" firstStartedPulling="2025-08-13 00:43:43.577416124 +0000 UTC m=+45.844860986" lastFinishedPulling="2025-08-13 00:43:51.947089908 +0000 UTC m=+54.214534753" observedRunningTime="2025-08-13 00:43:52.517532531 +0000 UTC m=+54.784977401" watchObservedRunningTime="2025-08-13 00:43:52.518164357 +0000 UTC m=+54.785609222" Aug 13 00:43:52.574173 containerd[1580]: time="2025-08-13T00:43:52.574113426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\" id:\"2dfff34f67c280b74fc747f1b3908a0c1fe9075e4304acababf9212aa9ed350c\" pid:5066 
exited_at:{seconds:1755045832 nanos:573233104}" Aug 13 00:43:53.042195 containerd[1580]: time="2025-08-13T00:43:53.042122019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:53.043644 containerd[1580]: time="2025-08-13T00:43:53.043576569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 00:43:53.045241 containerd[1580]: time="2025-08-13T00:43:53.045129701Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:53.048413 containerd[1580]: time="2025-08-13T00:43:53.048294897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:53.049653 containerd[1580]: time="2025-08-13T00:43:53.049450373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.101322309s" Aug 13 00:43:53.049653 containerd[1580]: time="2025-08-13T00:43:53.049498499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:43:53.051379 containerd[1580]: time="2025-08-13T00:43:53.051251983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:43:53.057800 containerd[1580]: time="2025-08-13T00:43:53.057738872Z" level=info msg="CreateContainer within sandbox 
\"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:43:53.075444 containerd[1580]: time="2025-08-13T00:43:53.074564590Z" level=info msg="Container 0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:53.092158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414828233.mount: Deactivated successfully. Aug 13 00:43:53.103288 containerd[1580]: time="2025-08-13T00:43:53.103224276Z" level=info msg="CreateContainer within sandbox \"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60\"" Aug 13 00:43:53.104361 containerd[1580]: time="2025-08-13T00:43:53.103940502Z" level=info msg="StartContainer for \"0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60\"" Aug 13 00:43:53.107350 containerd[1580]: time="2025-08-13T00:43:53.107305362Z" level=info msg="connecting to shim 0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60" address="unix:///run/containerd/s/4c4e2b4e1a1cc75052a0d712a4b9f438976c63de985f07e965f67e7761a1ee1d" protocol=ttrpc version=3 Aug 13 00:43:53.160024 systemd[1]: Started cri-containerd-0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60.scope - libcontainer container 0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60. Aug 13 00:43:53.243722 containerd[1580]: time="2025-08-13T00:43:53.243555492Z" level=info msg="StartContainer for \"0d7935476070c26b65525eeebe94e9778003942b8d61f4fd5fa20452e8083c60\" returns successfully" Aug 13 00:43:55.169785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751773407.mount: Deactivated successfully. 
Aug 13 00:43:56.002360 containerd[1580]: time="2025-08-13T00:43:56.002289475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:56.004026 containerd[1580]: time="2025-08-13T00:43:56.003957974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 00:43:56.005828 containerd[1580]: time="2025-08-13T00:43:56.005737938Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:56.009108 containerd[1580]: time="2025-08-13T00:43:56.009012429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:56.010615 containerd[1580]: time="2025-08-13T00:43:56.009937826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.958315989s" Aug 13 00:43:56.010615 containerd[1580]: time="2025-08-13T00:43:56.009984473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:43:56.011964 containerd[1580]: time="2025-08-13T00:43:56.011924131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:43:56.016562 containerd[1580]: time="2025-08-13T00:43:56.016425360Z" level=info msg="CreateContainer within sandbox 
\"e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:43:56.032716 containerd[1580]: time="2025-08-13T00:43:56.029338100Z" level=info msg="Container f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:56.053539 containerd[1580]: time="2025-08-13T00:43:56.053453257Z" level=info msg="CreateContainer within sandbox \"e5e0cc72b8b060a012c2f82993cabb06e80670c5c14b4388217096236bd4eacc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\"" Aug 13 00:43:56.057318 containerd[1580]: time="2025-08-13T00:43:56.054895436Z" level=info msg="StartContainer for \"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\"" Aug 13 00:43:56.058471 containerd[1580]: time="2025-08-13T00:43:56.058423922Z" level=info msg="connecting to shim f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288" address="unix:///run/containerd/s/15575405deac0ae8c3756b410274f818a6fc5da354b3582fbdbe1142f37ead36" protocol=ttrpc version=3 Aug 13 00:43:56.104987 systemd[1]: Started cri-containerd-f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288.scope - libcontainer container f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288. 
Aug 13 00:43:56.183027 containerd[1580]: time="2025-08-13T00:43:56.182950101Z" level=info msg="StartContainer for \"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" returns successfully" Aug 13 00:43:56.215333 containerd[1580]: time="2025-08-13T00:43:56.214101707Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:43:56.215525 containerd[1580]: time="2025-08-13T00:43:56.215357378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:43:56.218772 containerd[1580]: time="2025-08-13T00:43:56.218673701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 206.700452ms" Aug 13 00:43:56.218949 containerd[1580]: time="2025-08-13T00:43:56.218818599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:43:56.220326 containerd[1580]: time="2025-08-13T00:43:56.220290721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:43:56.227853 containerd[1580]: time="2025-08-13T00:43:56.227797247Z" level=info msg="CreateContainer within sandbox \"cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:43:56.242659 containerd[1580]: time="2025-08-13T00:43:56.242574587Z" level=info msg="Container 6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:43:56.266498 
containerd[1580]: time="2025-08-13T00:43:56.265509354Z" level=info msg="CreateContainer within sandbox \"cb2f2e296259d11066f7282247de4cf1d1a6613c83003841439f69ad55644d88\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7\"" Aug 13 00:43:56.268733 containerd[1580]: time="2025-08-13T00:43:56.268659574Z" level=info msg="StartContainer for \"6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7\"" Aug 13 00:43:56.273322 containerd[1580]: time="2025-08-13T00:43:56.273272170Z" level=info msg="connecting to shim 6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7" address="unix:///run/containerd/s/ed9680eba10deab305f535aaa46d8f17c26d40e7fb38377d234fa6d45445e274" protocol=ttrpc version=3 Aug 13 00:43:56.313029 systemd[1]: Started cri-containerd-6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7.scope - libcontainer container 6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7. 
Aug 13 00:43:56.397661 containerd[1580]: time="2025-08-13T00:43:56.397521985Z" level=info msg="StartContainer for \"6f420bf7627cb6ee634e2d77c7068356e1051b5843b706ff94e5db73e42edef7\" returns successfully"
Aug 13 00:43:56.581035 kubelet[2840]: I0813 00:43:56.580247 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d6f6c8bbc-5tds7" podStartSLOduration=34.098064876 podStartE2EDuration="44.580220846s" podCreationTimestamp="2025-08-13 00:43:12 +0000 UTC" firstStartedPulling="2025-08-13 00:43:45.737674191 +0000 UTC m=+48.005119050" lastFinishedPulling="2025-08-13 00:43:56.219830158 +0000 UTC m=+58.487275020" observedRunningTime="2025-08-13 00:43:56.544037456 +0000 UTC m=+58.811482326" watchObservedRunningTime="2025-08-13 00:43:56.580220846 +0000 UTC m=+58.847665720"
Aug 13 00:43:56.582746 kubelet[2840]: I0813 00:43:56.581588 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-vklsd" podStartSLOduration=28.865818887 podStartE2EDuration="39.58156782s" podCreationTimestamp="2025-08-13 00:43:17 +0000 UTC" firstStartedPulling="2025-08-13 00:43:45.295889071 +0000 UTC m=+47.563333943" lastFinishedPulling="2025-08-13 00:43:56.011638017 +0000 UTC m=+58.279082876" observedRunningTime="2025-08-13 00:43:56.581337366 +0000 UTC m=+58.848782234" watchObservedRunningTime="2025-08-13 00:43:56.58156782 +0000 UTC m=+58.849012689"
Aug 13 00:43:56.745330 containerd[1580]: time="2025-08-13T00:43:56.745272635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"968de07278795534a0d5c54090339f5b3220bf9219b1e9cf692b48031f51e4b3\" pid:5210 exit_status:1 exited_at:{seconds:1755045836 nanos:744319668}"
Aug 13 00:43:57.525141 kubelet[2840]: I0813 00:43:57.524916 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:43:57.837568 containerd[1580]: time="2025-08-13T00:43:57.837373006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"f7a7e948be6128c604a16b14373f7f282b311d769654c6853a7185c7f922fd07\" pid:5243 exit_status:1 exited_at:{seconds:1755045837 nanos:834661279}"
Aug 13 00:43:57.999900 containerd[1580]: time="2025-08-13T00:43:57.999834823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:43:58.002711 containerd[1580]: time="2025-08-13T00:43:58.002398502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 00:43:58.004113 containerd[1580]: time="2025-08-13T00:43:58.004069369Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:43:58.013341 containerd[1580]: time="2025-08-13T00:43:58.013251292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:43:58.017376 containerd[1580]: time="2025-08-13T00:43:58.017206669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.796697367s"
Aug 13 00:43:58.017564 containerd[1580]: time="2025-08-13T00:43:58.017382895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 00:43:58.026289 containerd[1580]: time="2025-08-13T00:43:58.026222834Z" level=info msg="CreateContainer within sandbox \"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:43:58.047237 containerd[1580]: time="2025-08-13T00:43:58.047108825Z" level=info msg="Container 68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:43:58.062159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786942800.mount: Deactivated successfully.
Aug 13 00:43:58.073677 containerd[1580]: time="2025-08-13T00:43:58.073589167Z" level=info msg="CreateContainer within sandbox \"00a2daf5c2f57dfb9224074d07e8e4cc1e647af0c05bf7cbc44fff556aa3e8a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72\""
Aug 13 00:43:58.075375 containerd[1580]: time="2025-08-13T00:43:58.075319623Z" level=info msg="StartContainer for \"68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72\""
Aug 13 00:43:58.078974 containerd[1580]: time="2025-08-13T00:43:58.078927739Z" level=info msg="connecting to shim 68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72" address="unix:///run/containerd/s/4c4e2b4e1a1cc75052a0d712a4b9f438976c63de985f07e965f67e7761a1ee1d" protocol=ttrpc version=3
Aug 13 00:43:58.148375 systemd[1]: Started cri-containerd-68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72.scope - libcontainer container 68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72.
Aug 13 00:43:58.261180 containerd[1580]: time="2025-08-13T00:43:58.261124160Z" level=info msg="StartContainer for \"68ce74d16fc3f84851c3c5d3f8ef6f799bc3186b150c0d31153adfac95108d72\" returns successfully"
Aug 13 00:43:58.704006 containerd[1580]: time="2025-08-13T00:43:58.703953166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"575584485ee9867a1fac197cc5067f6b5fa4c658d6c651e4e9fba78f2601ae5c\" pid:5309 exit_status:1 exited_at:{seconds:1755045838 nanos:703222459}"
Aug 13 00:43:59.105214 kubelet[2840]: I0813 00:43:59.104771 2840 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:43:59.105214 kubelet[2840]: I0813 00:43:59.104827 2840 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:44:10.465621 containerd[1580]: time="2025-08-13T00:44:10.465540413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" id:\"593517fc0488123d1c927dd9e47c7d083350be6b5fac0619f7f620b782cd1166\" pid:5342 exited_at:{seconds:1755045850 nanos:465026531}"
Aug 13 00:44:10.491511 kubelet[2840]: I0813 00:44:10.490642 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hnp88" podStartSLOduration=40.667689842 podStartE2EDuration="53.490617706s" podCreationTimestamp="2025-08-13 00:43:17 +0000 UTC" firstStartedPulling="2025-08-13 00:43:45.196859674 +0000 UTC m=+47.464304517" lastFinishedPulling="2025-08-13 00:43:58.019787533 +0000 UTC m=+60.287232381" observedRunningTime="2025-08-13 00:43:58.564088692 +0000 UTC m=+60.831533560" watchObservedRunningTime="2025-08-13 00:44:10.490617706 +0000 UTC m=+72.758062580"
Aug 13 00:44:16.146712 kubelet[2840]: I0813 00:44:16.146634 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:44:16.364060 systemd[1]: Started sshd@9-10.128.0.26:22-139.178.68.195:37028.service - OpenSSH per-connection server daemon (139.178.68.195:37028).
Aug 13 00:44:16.721416 sshd[5359]: Accepted publickey for core from 139.178.68.195 port 37028 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:16.723403 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:16.736260 systemd-logind[1553]: New session 10 of user core.
Aug 13 00:44:16.742508 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:44:17.051749 sshd[5362]: Connection closed by 139.178.68.195 port 37028
Aug 13 00:44:17.053153 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:17.059261 systemd[1]: sshd@9-10.128.0.26:22-139.178.68.195:37028.service: Deactivated successfully.
Aug 13 00:44:17.064153 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:44:17.066377 systemd-logind[1553]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:44:17.070139 systemd-logind[1553]: Removed session 10.
Aug 13 00:44:21.096714 kubelet[2840]: I0813 00:44:21.096015 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:44:22.111051 systemd[1]: Started sshd@10-10.128.0.26:22-139.178.68.195:46602.service - OpenSSH per-connection server daemon (139.178.68.195:46602).
Aug 13 00:44:22.444813 sshd[5378]: Accepted publickey for core from 139.178.68.195 port 46602 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:22.446866 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:22.457329 systemd-logind[1553]: New session 11 of user core.
Aug 13 00:44:22.462962 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:44:22.568205 containerd[1580]: time="2025-08-13T00:44:22.568124148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\" id:\"8ba694887c266a68e2689b198df474780476bb4861fbd4af0728b649c93d7a45\" pid:5393 exited_at:{seconds:1755045862 nanos:567289536}"
Aug 13 00:44:22.896523 sshd[5380]: Connection closed by 139.178.68.195 port 46602
Aug 13 00:44:22.898125 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:22.907492 systemd[1]: sshd@10-10.128.0.26:22-139.178.68.195:46602.service: Deactivated successfully.
Aug 13 00:44:22.913509 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:44:22.916083 systemd-logind[1553]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:44:22.919186 systemd-logind[1553]: Removed session 11.
Aug 13 00:44:27.959421 systemd[1]: Started sshd@11-10.128.0.26:22-139.178.68.195:46608.service - OpenSSH per-connection server daemon (139.178.68.195:46608).
Aug 13 00:44:28.316968 sshd[5421]: Accepted publickey for core from 139.178.68.195 port 46608 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:28.317877 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:28.328394 systemd-logind[1553]: New session 12 of user core.
Aug 13 00:44:28.337476 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:44:28.688570 sshd[5424]: Connection closed by 139.178.68.195 port 46608
Aug 13 00:44:28.690285 sshd-session[5421]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:28.705369 systemd[1]: sshd@11-10.128.0.26:22-139.178.68.195:46608.service: Deactivated successfully.
Aug 13 00:44:28.711955 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:44:28.721232 systemd-logind[1553]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:44:28.725573 systemd-logind[1553]: Removed session 12.
Aug 13 00:44:28.752786 systemd[1]: Started sshd@12-10.128.0.26:22-139.178.68.195:46618.service - OpenSSH per-connection server daemon (139.178.68.195:46618).
Aug 13 00:44:28.880411 containerd[1580]: time="2025-08-13T00:44:28.880308764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"5565a562d09d284a45810ac0ff6cc5302e4af97e6ce710aa91076723b68f55cb\" pid:5446 exited_at:{seconds:1755045868 nanos:877545945}"
Aug 13 00:44:29.091980 sshd[5456]: Accepted publickey for core from 139.178.68.195 port 46618 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:29.096522 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:29.107106 systemd-logind[1553]: New session 13 of user core.
Aug 13 00:44:29.116100 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:44:29.500419 sshd[5464]: Connection closed by 139.178.68.195 port 46618
Aug 13 00:44:29.501080 sshd-session[5456]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:29.513981 systemd-logind[1553]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:44:29.515459 systemd[1]: sshd@12-10.128.0.26:22-139.178.68.195:46618.service: Deactivated successfully.
Aug 13 00:44:29.525456 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:44:29.531524 systemd-logind[1553]: Removed session 13.
Aug 13 00:44:29.566139 systemd[1]: Started sshd@13-10.128.0.26:22-139.178.68.195:46634.service - OpenSSH per-connection server daemon (139.178.68.195:46634).
Aug 13 00:44:29.893601 sshd[5474]: Accepted publickey for core from 139.178.68.195 port 46634 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:29.895061 sshd-session[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:29.906626 systemd-logind[1553]: New session 14 of user core.
Aug 13 00:44:29.914876 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:44:30.266816 sshd[5476]: Connection closed by 139.178.68.195 port 46634
Aug 13 00:44:30.268976 sshd-session[5474]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:30.280041 systemd-logind[1553]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:44:30.281185 systemd[1]: sshd@13-10.128.0.26:22-139.178.68.195:46634.service: Deactivated successfully.
Aug 13 00:44:30.288835 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:44:30.294093 systemd-logind[1553]: Removed session 14.
Aug 13 00:44:34.067017 containerd[1580]: time="2025-08-13T00:44:34.066947732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\" id:\"0172bf6e7554986f2043d2b2dc2152566e3d41cdc95fef36fbfdc8bd0d4e677a\" pid:5501 exited_at:{seconds:1755045874 nanos:66313500}"
Aug 13 00:44:35.330885 systemd[1]: Started sshd@14-10.128.0.26:22-139.178.68.195:39590.service - OpenSSH per-connection server daemon (139.178.68.195:39590).
Aug 13 00:44:35.676827 sshd[5511]: Accepted publickey for core from 139.178.68.195 port 39590 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:35.679814 sshd-session[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:35.692234 systemd-logind[1553]: New session 15 of user core.
Aug 13 00:44:35.700951 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:44:36.047843 sshd[5516]: Connection closed by 139.178.68.195 port 39590
Aug 13 00:44:36.048782 sshd-session[5511]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:36.057178 systemd-logind[1553]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:44:36.061518 systemd[1]: sshd@14-10.128.0.26:22-139.178.68.195:39590.service: Deactivated successfully.
Aug 13 00:44:36.067345 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:44:36.073448 systemd-logind[1553]: Removed session 15.
Aug 13 00:44:40.794399 containerd[1580]: time="2025-08-13T00:44:40.793999774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" id:\"5561d37296164a237ec4a874338f3b6f2cbcf8240078e026b5d0822431b1dc8f\" pid:5542 exit_status:1 exited_at:{seconds:1755045880 nanos:792176915}"
Aug 13 00:44:41.108734 systemd[1]: Started sshd@15-10.128.0.26:22-139.178.68.195:54892.service - OpenSSH per-connection server daemon (139.178.68.195:54892).
Aug 13 00:44:41.435340 sshd[5555]: Accepted publickey for core from 139.178.68.195 port 54892 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:41.439208 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:41.457159 systemd-logind[1553]: New session 16 of user core.
Aug 13 00:44:41.462990 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:44:41.798003 sshd[5557]: Connection closed by 139.178.68.195 port 54892
Aug 13 00:44:41.798982 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:41.809052 systemd[1]: sshd@15-10.128.0.26:22-139.178.68.195:54892.service: Deactivated successfully.
Aug 13 00:44:41.815290 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:44:41.819376 systemd-logind[1553]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:44:41.823061 systemd-logind[1553]: Removed session 16.
Aug 13 00:44:41.918640 containerd[1580]: time="2025-08-13T00:44:41.918583158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"42badb7b2a611d0748d1e41a442e862991f6cd7017e377e7f84dd7c367677b4b\" pid:5577 exited_at:{seconds:1755045881 nanos:917555293}"
Aug 13 00:44:46.857113 systemd[1]: Started sshd@16-10.128.0.26:22-139.178.68.195:54894.service - OpenSSH per-connection server daemon (139.178.68.195:54894).
Aug 13 00:44:46.943045 update_engine[1560]: I20250813 00:44:46.942962 1560 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 00:44:46.943045 update_engine[1560]: I20250813 00:44:46.943046 1560 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 00:44:46.943731 update_engine[1560]: I20250813 00:44:46.943297 1560 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 00:44:46.944318 update_engine[1560]: I20250813 00:44:46.944273 1560 omaha_request_params.cc:62] Current group set to beta
Aug 13 00:44:46.944655 update_engine[1560]: I20250813 00:44:46.944454 1560 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 00:44:46.944655 update_engine[1560]: I20250813 00:44:46.944480 1560 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 00:44:46.944924 update_engine[1560]: I20250813 00:44:46.944883 1560 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:44:46.945074 update_engine[1560]: I20250813 00:44:46.945051 1560 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 00:44:46.945744 update_engine[1560]: I20250813 00:44:46.945286 1560 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:44:46.945744 update_engine[1560]: I20250813 00:44:46.945309 1560 omaha_request_action.cc:272] Request:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]:
Aug 13 00:44:46.945744 update_engine[1560]: I20250813 00:44:46.945323 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:44:46.946265 locksmithd[1631]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 00:44:46.949214 update_engine[1560]: I20250813 00:44:46.948607 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:44:46.949786 update_engine[1560]: I20250813 00:44:46.949743 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:44:47.199307 sshd[5594]: Accepted publickey for core from 139.178.68.195 port 54894 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:47.203164 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:47.217103 systemd-logind[1553]: New session 17 of user core.
Aug 13 00:44:47.224952 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:44:47.393532 update_engine[1560]: E20250813 00:44:47.393458 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:44:47.393735 update_engine[1560]: I20250813 00:44:47.393601 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 00:44:47.552039 sshd[5596]: Connection closed by 139.178.68.195 port 54894
Aug 13 00:44:47.551081 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:47.561345 systemd[1]: sshd@16-10.128.0.26:22-139.178.68.195:54894.service: Deactivated successfully.
Aug 13 00:44:47.566049 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:44:47.569470 systemd-logind[1553]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:44:47.573360 systemd-logind[1553]: Removed session 17.
Aug 13 00:44:52.609049 systemd[1]: Started sshd@17-10.128.0.26:22-139.178.68.195:53116.service - OpenSSH per-connection server daemon (139.178.68.195:53116).
Aug 13 00:44:52.640071 containerd[1580]: time="2025-08-13T00:44:52.639140937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5efbab7cfe9538ce09a0e6b4b6418489beee85eab33922023d7da814b27a6178\" id:\"f41043154c53226840ef375e578848030b6283d6a943a265439db6c872517aec\" pid:5621 exited_at:{seconds:1755045892 nanos:637651023}"
Aug 13 00:44:52.946938 sshd[5628]: Accepted publickey for core from 139.178.68.195 port 53116 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:52.949312 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:52.959791 systemd-logind[1553]: New session 18 of user core.
Aug 13 00:44:52.968942 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:44:53.298477 sshd[5633]: Connection closed by 139.178.68.195 port 53116
Aug 13 00:44:53.299655 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:53.312315 systemd[1]: sshd@17-10.128.0.26:22-139.178.68.195:53116.service: Deactivated successfully.
Aug 13 00:44:53.317805 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:44:53.320540 systemd-logind[1553]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:44:53.323490 systemd-logind[1553]: Removed session 18.
Aug 13 00:44:53.360082 systemd[1]: Started sshd@18-10.128.0.26:22-139.178.68.195:53122.service - OpenSSH per-connection server daemon (139.178.68.195:53122).
Aug 13 00:44:53.699311 sshd[5645]: Accepted publickey for core from 139.178.68.195 port 53122 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:53.702615 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:53.718492 systemd-logind[1553]: New session 19 of user core.
Aug 13 00:44:53.724478 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:44:54.110537 sshd[5647]: Connection closed by 139.178.68.195 port 53122
Aug 13 00:44:54.113779 sshd-session[5645]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:54.122141 systemd-logind[1553]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:44:54.124297 systemd[1]: sshd@18-10.128.0.26:22-139.178.68.195:53122.service: Deactivated successfully.
Aug 13 00:44:54.129963 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:44:54.134953 systemd-logind[1553]: Removed session 19.
Aug 13 00:44:54.171065 systemd[1]: Started sshd@19-10.128.0.26:22-139.178.68.195:53134.service - OpenSSH per-connection server daemon (139.178.68.195:53134).
Aug 13 00:44:54.506068 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 53134 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:54.509109 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:54.522874 systemd-logind[1553]: New session 20 of user core.
Aug 13 00:44:54.527946 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:44:55.813333 sshd[5659]: Connection closed by 139.178.68.195 port 53134
Aug 13 00:44:55.812498 sshd-session[5657]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:55.822432 systemd[1]: sshd@19-10.128.0.26:22-139.178.68.195:53134.service: Deactivated successfully.
Aug 13 00:44:55.828533 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:44:55.834929 systemd-logind[1553]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:44:55.838278 systemd-logind[1553]: Removed session 20.
Aug 13 00:44:55.872517 systemd[1]: Started sshd@20-10.128.0.26:22-139.178.68.195:53136.service - OpenSSH per-connection server daemon (139.178.68.195:53136).
Aug 13 00:44:56.199473 sshd[5675]: Accepted publickey for core from 139.178.68.195 port 53136 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:56.202318 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:56.211906 systemd-logind[1553]: New session 21 of user core.
Aug 13 00:44:56.218935 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:44:56.841012 sshd[5678]: Connection closed by 139.178.68.195 port 53136
Aug 13 00:44:56.842985 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:56.849856 systemd[1]: sshd@20-10.128.0.26:22-139.178.68.195:53136.service: Deactivated successfully.
Aug 13 00:44:56.855082 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:44:56.858619 systemd-logind[1553]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:44:56.862850 systemd-logind[1553]: Removed session 21.
Aug 13 00:44:56.900230 systemd[1]: Started sshd@21-10.128.0.26:22-139.178.68.195:53138.service - OpenSSH per-connection server daemon (139.178.68.195:53138).
Aug 13 00:44:57.231722 sshd[5688]: Accepted publickey for core from 139.178.68.195 port 53138 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:44:57.232711 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:44:57.242049 systemd-logind[1553]: New session 22 of user core.
Aug 13 00:44:57.250927 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:44:57.651118 sshd[5690]: Connection closed by 139.178.68.195 port 53138
Aug 13 00:44:57.653020 sshd-session[5688]: pam_unix(sshd:session): session closed for user core
Aug 13 00:44:57.661556 systemd[1]: sshd@21-10.128.0.26:22-139.178.68.195:53138.service: Deactivated successfully.
Aug 13 00:44:57.662153 systemd-logind[1553]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:44:57.668589 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:44:57.674278 systemd-logind[1553]: Removed session 22.
Aug 13 00:44:57.938065 update_engine[1560]: I20250813 00:44:57.937877 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:44:57.940269 update_engine[1560]: I20250813 00:44:57.939832 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:44:57.940269 update_engine[1560]: I20250813 00:44:57.940194 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:44:57.952125 update_engine[1560]: E20250813 00:44:57.952061 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:44:57.952472 update_engine[1560]: I20250813 00:44:57.952411 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Aug 13 00:44:58.716993 containerd[1580]: time="2025-08-13T00:44:58.716935033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b919a013d9fffee6cef559e7b4958ef20c9cb425559d5b71030281e0c96288\" id:\"99f3c953bbb16f3fb38d4888aae2c5c8f55216c24060e5bb5ff7b748126e5cec\" pid:5716 exited_at:{seconds:1755045898 nanos:716441035}"
Aug 13 00:45:02.716663 systemd[1]: Started sshd@22-10.128.0.26:22-139.178.68.195:53594.service - OpenSSH per-connection server daemon (139.178.68.195:53594).
Aug 13 00:45:03.060670 sshd[5728]: Accepted publickey for core from 139.178.68.195 port 53594 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:45:03.062278 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:45:03.072046 systemd-logind[1553]: New session 23 of user core.
Aug 13 00:45:03.079174 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:45:03.397599 sshd[5730]: Connection closed by 139.178.68.195 port 53594
Aug 13 00:45:03.398176 sshd-session[5728]: pam_unix(sshd:session): session closed for user core
Aug 13 00:45:03.408269 systemd-logind[1553]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:45:03.410562 systemd[1]: sshd@22-10.128.0.26:22-139.178.68.195:53594.service: Deactivated successfully.
Aug 13 00:45:03.417309 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:45:03.422377 systemd-logind[1553]: Removed session 23.
Aug 13 00:45:07.942680 update_engine[1560]: I20250813 00:45:07.941483 1560 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:45:07.942680 update_engine[1560]: I20250813 00:45:07.941951 1560 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:45:07.942680 update_engine[1560]: I20250813 00:45:07.942302 1560 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:45:07.967154 update_engine[1560]: E20250813 00:45:07.966964 1560 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:45:07.967154 update_engine[1560]: I20250813 00:45:07.967088 1560 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 00:45:08.459843 systemd[1]: Started sshd@23-10.128.0.26:22-139.178.68.195:53604.service - OpenSSH per-connection server daemon (139.178.68.195:53604).
Aug 13 00:45:08.787775 sshd[5752]: Accepted publickey for core from 139.178.68.195 port 53604 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:45:08.790112 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:45:08.806301 systemd-logind[1553]: New session 24 of user core.
Aug 13 00:45:08.812026 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:45:09.239677 sshd[5755]: Connection closed by 139.178.68.195 port 53604
Aug 13 00:45:09.241038 sshd-session[5752]: pam_unix(sshd:session): session closed for user core
Aug 13 00:45:09.252514 systemd[1]: sshd@23-10.128.0.26:22-139.178.68.195:53604.service: Deactivated successfully.
Aug 13 00:45:09.260524 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:45:09.269764 systemd-logind[1553]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:45:09.273872 systemd-logind[1553]: Removed session 24.
Aug 13 00:45:10.527139 containerd[1580]: time="2025-08-13T00:45:10.527051365Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd7caba0d4bb235bb418ca6457334a7d292b5292f8db698ac8b489c78811979\" id:\"8c5dac7c876ec56c80c2936a4cc9c8df72c4f5755118d740f09286640599d00a\" pid:5779 exited_at:{seconds:1755045910 nanos:524069848}"
Aug 13 00:45:14.303490 systemd[1]: Started sshd@24-10.128.0.26:22-139.178.68.195:39036.service - OpenSSH per-connection server daemon (139.178.68.195:39036).
Aug 13 00:45:14.640085 sshd[5791]: Accepted publickey for core from 139.178.68.195 port 39036 ssh2: RSA SHA256:c4HJGLSnkN6JELzipj7D/dZP9T3YyokUceeKsB+L4Qg
Aug 13 00:45:14.641140 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:45:14.655401 systemd-logind[1553]: New session 25 of user core.
Aug 13 00:45:14.661904 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:45:15.001945 sshd[5793]: Connection closed by 139.178.68.195 port 39036
Aug 13 00:45:15.004227 sshd-session[5791]: pam_unix(sshd:session): session closed for user core
Aug 13 00:45:15.015029 systemd-logind[1553]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:45:15.016445 systemd[1]: sshd@24-10.128.0.26:22-139.178.68.195:39036.service: Deactivated successfully.
Aug 13 00:45:15.021673 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:45:15.028638 systemd-logind[1553]: Removed session 25.