Sep 16 05:02:23.147432 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 05:02:23.147472 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 05:02:23.147489 kernel: BIOS-provided physical RAM map:
Sep 16 05:02:23.147502 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 16 05:02:23.147514 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 16 05:02:23.147528 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 16 05:02:23.147547 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 16 05:02:23.147562 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 16 05:02:23.147576 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 16 05:02:23.147590 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 16 05:02:23.147604 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 16 05:02:23.147618 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 16 05:02:23.147632 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 16 05:02:23.147646 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 16 05:02:23.147667 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 16 05:02:23.147683 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 16 05:02:23.147699 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 16 05:02:23.147715 kernel: NX (Execute Disable) protection: active
Sep 16 05:02:23.147731 kernel: APIC: Static calls initialized
Sep 16 05:02:23.147747 kernel: efi: EFI v2.7 by EDK II
Sep 16 05:02:23.147763 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 16 05:02:23.147779 kernel: random: crng init done
Sep 16 05:02:23.147818 kernel: secureboot: Secure boot disabled
Sep 16 05:02:23.147835 kernel: SMBIOS 2.4 present.
Sep 16 05:02:23.147850 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 16 05:02:23.147866 kernel: DMI: Memory slots populated: 1/1
Sep 16 05:02:23.147882 kernel: Hypervisor detected: KVM
Sep 16 05:02:23.147917 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 05:02:23.147948 kernel: kvm-clock: using sched offset of 14889887084 cycles
Sep 16 05:02:23.147965 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 05:02:23.147980 kernel: tsc: Detected 2299.998 MHz processor
Sep 16 05:02:23.147995 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 05:02:23.148016 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 05:02:23.148032 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 16 05:02:23.148048 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 16 05:02:23.148064 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 05:02:23.148080 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 16 05:02:23.148095 kernel: Using GB pages for direct mapping
Sep 16 05:02:23.148110 kernel: ACPI: Early table checksum verification disabled
Sep 16 05:02:23.148126 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 16 05:02:23.148152 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 16 05:02:23.148170 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 16 05:02:23.148187 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 16 05:02:23.148206 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 16 05:02:23.148221 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 16 05:02:23.148237 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 16 05:02:23.148258 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 16 05:02:23.148275 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 16 05:02:23.148292 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 16 05:02:23.148308 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 16 05:02:23.148323 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 16 05:02:23.148341 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 16 05:02:23.148357 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 16 05:02:23.148374 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 16 05:02:23.148389 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 16 05:02:23.148408 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 16 05:02:23.148862 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 16 05:02:23.148879 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 16 05:02:23.148895 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 16 05:02:23.148912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 16 05:02:23.148937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 16 05:02:23.148954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 16 05:02:23.148970 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Sep 16 05:02:23.148986 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Sep 16 05:02:23.149008 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Sep 16 05:02:23.149023 kernel: Zone ranges:
Sep 16 05:02:23.149039 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 05:02:23.149054 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 16 05:02:23.149069 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 16 05:02:23.149083 kernel: Device empty
Sep 16 05:02:23.149099 kernel: Movable zone start for each node
Sep 16 05:02:23.149114 kernel: Early memory node ranges
Sep 16 05:02:23.149139 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 16 05:02:23.149153 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 16 05:02:23.149173 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 16 05:02:23.149189 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 16 05:02:23.149205 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 16 05:02:23.149220 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 16 05:02:23.149236 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 16 05:02:23.149253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 05:02:23.149270 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 16 05:02:23.149288 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 16 05:02:23.149304 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 16 05:02:23.149324 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 16 05:02:23.149341 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 16 05:02:23.149357 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 16 05:02:23.149375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 05:02:23.149391 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 05:02:23.150569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 05:02:23.150591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 05:02:23.150609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 05:02:23.150750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 05:02:23.150776 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 05:02:23.150794 kernel: CPU topo: Max. logical packages: 1
Sep 16 05:02:23.150835 kernel: CPU topo: Max. logical dies: 1
Sep 16 05:02:23.150974 kernel: CPU topo: Max. dies per package: 1
Sep 16 05:02:23.150991 kernel: CPU topo: Max. threads per core: 2
Sep 16 05:02:23.151008 kernel: CPU topo: Num. cores per package: 1
Sep 16 05:02:23.151025 kernel: CPU topo: Num. threads per package: 2
Sep 16 05:02:23.151043 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 16 05:02:23.151060 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 16 05:02:23.151083 kernel: Booting paravirtualized kernel on KVM
Sep 16 05:02:23.151228 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 05:02:23.151247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 16 05:02:23.151265 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 16 05:02:23.151282 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 16 05:02:23.151298 kernel: pcpu-alloc: [0] 0 1
Sep 16 05:02:23.151315 kernel: kvm-guest: PV spinlocks enabled
Sep 16 05:02:23.151333 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 05:02:23.151353 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 05:02:23.151452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 05:02:23.151469 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 16 05:02:23.151485 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 05:02:23.151502 kernel: Fallback order for Node 0: 0
Sep 16 05:02:23.151519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Sep 16 05:02:23.151536 kernel: Policy zone: Normal
Sep 16 05:02:23.151553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 05:02:23.151571 kernel: software IO TLB: area num 2.
Sep 16 05:02:23.151604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 05:02:23.151622 kernel: Kernel/User page tables isolation: enabled
Sep 16 05:02:23.151639 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 05:02:23.151659 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 05:02:23.151676 kernel: Dynamic Preempt: voluntary
Sep 16 05:02:23.151694 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 05:02:23.151719 kernel: rcu: RCU event tracing is enabled.
Sep 16 05:02:23.151739 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 05:02:23.151758 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 05:02:23.151781 kernel: Rude variant of Tasks RCU enabled.
Sep 16 05:02:23.152957 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 05:02:23.152985 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 05:02:23.153003 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 05:02:23.153020 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:02:23.153037 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:02:23.153056 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 05:02:23.153073 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 16 05:02:23.153099 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 05:02:23.153117 kernel: Console: colour dummy device 80x25
Sep 16 05:02:23.153135 kernel: printk: legacy console [ttyS0] enabled
Sep 16 05:02:23.153155 kernel: ACPI: Core revision 20240827
Sep 16 05:02:23.153173 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 05:02:23.153201 kernel: x2apic enabled
Sep 16 05:02:23.153220 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 05:02:23.153238 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 16 05:02:23.153256 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 16 05:02:23.153278 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 16 05:02:23.153296 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 16 05:02:23.153315 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 16 05:02:23.153334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 05:02:23.153358 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 16 05:02:23.153376 kernel: Spectre V2 : Mitigation: IBRS
Sep 16 05:02:23.153395 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 05:02:23.153413 kernel: RETBleed: Mitigation: IBRS
Sep 16 05:02:23.153432 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 16 05:02:23.153454 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 16 05:02:23.153473 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 16 05:02:23.153492 kernel: MDS: Mitigation: Clear CPU buffers
Sep 16 05:02:23.153510 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 16 05:02:23.153528 kernel: active return thunk: its_return_thunk
Sep 16 05:02:23.153547 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 16 05:02:23.153565 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 05:02:23.153583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 05:02:23.153606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 05:02:23.153625 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 05:02:23.153644 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 16 05:02:23.153662 kernel: Freeing SMP alternatives memory: 32K
Sep 16 05:02:23.153682 kernel: pid_max: default: 32768 minimum: 301
Sep 16 05:02:23.153700 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 05:02:23.153717 kernel: landlock: Up and running.
Sep 16 05:02:23.153734 kernel: SELinux: Initializing.
Sep 16 05:02:23.153753 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 05:02:23.153771 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 05:02:23.153795 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 16 05:02:23.155718 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 16 05:02:23.155739 kernel: signal: max sigframe size: 1776
Sep 16 05:02:23.155756 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 05:02:23.155774 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 05:02:23.155791 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 05:02:23.155824 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 16 05:02:23.155842 kernel: smp: Bringing up secondary CPUs ...
Sep 16 05:02:23.155865 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 05:02:23.155883 kernel: .... node #0, CPUs: #1
Sep 16 05:02:23.155903 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 16 05:02:23.155922 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 16 05:02:23.155940 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 05:02:23.155965 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 16 05:02:23.155982 kernel: Memory: 7564024K/7860552K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 290704K reserved, 0K cma-reserved)
Sep 16 05:02:23.156000 kernel: devtmpfs: initialized
Sep 16 05:02:23.156018 kernel: x86/mm: Memory block size: 128MB
Sep 16 05:02:23.156041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 16 05:02:23.156060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 05:02:23.156079 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 05:02:23.156096 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 05:02:23.156113 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 05:02:23.156129 kernel: audit: initializing netlink subsys (disabled)
Sep 16 05:02:23.156146 kernel: audit: type=2000 audit(1757998939.019:1): state=initialized audit_enabled=0 res=1
Sep 16 05:02:23.156164 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 05:02:23.156195 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 05:02:23.156215 kernel: cpuidle: using governor menu
Sep 16 05:02:23.156232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 05:02:23.156250 kernel: dca service started, version 1.12.1
Sep 16 05:02:23.156267 kernel: PCI: Using configuration type 1 for base access
Sep 16 05:02:23.156286 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 05:02:23.156304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 05:02:23.156323 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 05:02:23.156342 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 05:02:23.156365 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 05:02:23.156383 kernel: ACPI: Added _OSI(Module Device)
Sep 16 05:02:23.156402 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 05:02:23.156420 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 05:02:23.156439 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 16 05:02:23.156458 kernel: ACPI: Interpreter enabled
Sep 16 05:02:23.156477 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 16 05:02:23.156495 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 05:02:23.156514 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 05:02:23.156532 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 16 05:02:23.156555 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 16 05:02:23.156573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 05:02:23.156880 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 05:02:23.157089 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 16 05:02:23.157294 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 16 05:02:23.157320 kernel: PCI host bridge to bus 0000:00
Sep 16 05:02:23.157512 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 16 05:02:23.157702 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 16 05:02:23.157966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 05:02:23.158148 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 16 05:02:23.158329 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 05:02:23.158534 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 16 05:02:23.158754 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 16 05:02:23.159015 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 16 05:02:23.159212 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 16 05:02:23.159405 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Sep 16 05:02:23.159593 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Sep 16 05:02:23.159780 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Sep 16 05:02:23.160926 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 16 05:02:23.161589 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Sep 16 05:02:23.162407 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Sep 16 05:02:23.162621 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 16 05:02:23.162857 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Sep 16 05:02:23.163066 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Sep 16 05:02:23.163092 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 05:02:23.163112 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 05:02:23.163136 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 05:02:23.163155 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 05:02:23.163181 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 16 05:02:23.163200 kernel: iommu: Default domain type: Translated
Sep 16 05:02:23.163219 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 05:02:23.163237 kernel: efivars: Registered efivars operations
Sep 16 05:02:23.163257 kernel: PCI: Using ACPI for IRQ routing
Sep 16 05:02:23.163275 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 05:02:23.163294 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 16 05:02:23.163317 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 16 05:02:23.163335 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 16 05:02:23.163353 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 16 05:02:23.163372 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 16 05:02:23.163390 kernel: vgaarb: loaded
Sep 16 05:02:23.163408 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 05:02:23.163427 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 05:02:23.163445 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 05:02:23.163464 kernel: pnp: PnP ACPI init
Sep 16 05:02:23.163487 kernel: pnp: PnP ACPI: found 7 devices
Sep 16 05:02:23.163506 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 05:02:23.163525 kernel: NET: Registered PF_INET protocol family
Sep 16 05:02:23.163544 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 16 05:02:23.163563 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 16 05:02:23.163582 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 05:02:23.163601 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 05:02:23.163620 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 16 05:02:23.163639 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 16 05:02:23.163662 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 05:02:23.163680 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 16 05:02:23.163699 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 05:02:23.163716 kernel: NET: Registered PF_XDP protocol family
Sep 16 05:02:23.163921 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 16 05:02:23.164088 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 16 05:02:23.164279 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 05:02:23.164445 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 16 05:02:23.164639 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 16 05:02:23.164663 kernel: PCI: CLS 0 bytes, default 64
Sep 16 05:02:23.164683 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 16 05:02:23.164702 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 16 05:02:23.164722 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 16 05:02:23.164740 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 16 05:02:23.164759 kernel: clocksource: Switched to clocksource tsc
Sep 16 05:02:23.164778 kernel: Initialise system trusted keyrings
Sep 16 05:02:23.164824 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 16 05:02:23.164843 kernel: Key type asymmetric registered
Sep 16 05:02:23.164861 kernel: Asymmetric key parser 'x509' registered
Sep 16 05:02:23.164879 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 05:02:23.164896 kernel: io scheduler mq-deadline registered
Sep 16 05:02:23.164916 kernel: io scheduler kyber registered
Sep 16 05:02:23.164933 kernel: io scheduler bfq registered
Sep 16 05:02:23.164952 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 05:02:23.164971 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 16 05:02:23.165196 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 16 05:02:23.165222 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 16 05:02:23.165413 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 16 05:02:23.165437 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 16 05:02:23.165627 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 16 05:02:23.165651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 05:02:23.165671 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 05:02:23.165689 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 16 05:02:23.165708 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 16 05:02:23.165733 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 16 05:02:23.165961 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 16 05:02:23.165989 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 16 05:02:23.166009 kernel: i8042: Warning: Keylock active
Sep 16 05:02:23.166027 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 16 05:02:23.166045 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 16 05:02:23.166266 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 16 05:02:23.166461 kernel: rtc_cmos 00:00: registered as rtc0
Sep 16 05:02:23.166631 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T05:02:22 UTC (1757998942)
Sep 16 05:02:23.166796 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 16 05:02:23.166865 kernel: intel_pstate: CPU model not supported
Sep 16 05:02:23.166884 kernel: pstore: Using crash dump compression: deflate
Sep 16 05:02:23.166903 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 05:02:23.166920 kernel: NET: Registered PF_INET6 protocol family
Sep 16 05:02:23.166937 kernel: Segment Routing with IPv6
Sep 16 05:02:23.166955 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 05:02:23.166979 kernel: NET: Registered PF_PACKET protocol family
Sep 16 05:02:23.166997 kernel: Key type dns_resolver registered
Sep 16 05:02:23.167015 kernel: IPI shorthand broadcast: enabled
Sep 16 05:02:23.167034 kernel: sched_clock: Marking stable (3668004219, 139930100)->(3835774600, -27840281)
Sep 16 05:02:23.167052 kernel: registered taskstats version 1
Sep 16 05:02:23.167070 kernel: Loading compiled-in X.509 certificates
Sep 16 05:02:23.167088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 05:02:23.167106 kernel: Demotion targets for Node 0: null
Sep 16 05:02:23.167124 kernel: Key type .fscrypt registered
Sep 16 05:02:23.167146 kernel: Key type fscrypt-provisioning registered
Sep 16 05:02:23.167164 kernel: ima: Allocated hash algorithm: sha1
Sep 16 05:02:23.167189 kernel: ima: No architecture policies found
Sep 16 05:02:23.167207 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 05:02:23.167226 kernel: clk: Disabling unused clocks
Sep 16 05:02:23.167243 kernel: Warning: unable to open an initial console.
Sep 16 05:02:23.167263 kernel: Freeing unused kernel image (initmem) memory: 54096K
Sep 16 05:02:23.167281 kernel: Write protecting the kernel read-only data: 24576k
Sep 16 05:02:23.167303 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 16 05:02:23.167322 kernel: Run /init as init process
Sep 16 05:02:23.167341 kernel: with arguments:
Sep 16 05:02:23.167359 kernel: /init
Sep 16 05:02:23.167377 kernel: with environment:
Sep 16 05:02:23.167395 kernel: HOME=/
Sep 16 05:02:23.167413 kernel: TERM=linux
Sep 16 05:02:23.167431 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 05:02:23.167452 systemd[1]: Successfully made /usr/ read-only.
Sep 16 05:02:23.167479 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 05:02:23.167500 systemd[1]: Detected virtualization google.
Sep 16 05:02:23.167519 systemd[1]: Detected architecture x86-64.
Sep 16 05:02:23.167537 systemd[1]: Running in initrd.
Sep 16 05:02:23.167556 systemd[1]: No hostname configured, using default hostname.
Sep 16 05:02:23.167576 systemd[1]: Hostname set to .
Sep 16 05:02:23.167595 systemd[1]: Initializing machine ID from random generator.
Sep 16 05:02:23.167616 systemd[1]: Queued start job for default target initrd.target.
Sep 16 05:02:23.167658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:02:23.167682 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:02:23.167704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 05:02:23.167724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 05:02:23.167745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 05:02:23.167772 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 05:02:23.167795 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 05:02:23.167845 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 05:02:23.167866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:02:23.167887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:02:23.167910 systemd[1]: Reached target paths.target - Path Units.
Sep 16 05:02:23.167930 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 05:02:23.167955 systemd[1]: Reached target swap.target - Swaps.
Sep 16 05:02:23.167974 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 05:02:23.167994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 05:02:23.168013 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 05:02:23.168033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 05:02:23.168052 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 05:02:23.168072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:02:23.168092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:02:23.168111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:02:23.168134 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 05:02:23.168154 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 05:02:23.168181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 05:02:23.168200 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 05:02:23.168221 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 05:02:23.168241 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 05:02:23.168261 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 05:02:23.168281 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 05:02:23.168304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 05:02:23.168325 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 05:02:23.168386 systemd-journald[205]: Collecting audit messages is disabled.
Sep 16 05:02:23.168435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:02:23.168455 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 05:02:23.168480 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 05:02:23.168502 systemd-journald[205]: Journal started
Sep 16 05:02:23.168546 systemd-journald[205]: Runtime Journal (/run/log/journal/2004b44102f643e8b2017622756a90a8) is 8M, max 148.9M, 140.9M free.
Sep 16 05:02:23.172836 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 05:02:23.183339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 05:02:23.193702 systemd-modules-load[207]: Inserted module 'overlay'
Sep 16 05:02:23.204235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:02:23.208238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 05:02:23.219611 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 05:02:23.227975 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 05:02:23.231854 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 05:02:23.243638 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:02:23.253866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 05:02:23.257380 systemd-modules-load[207]: Inserted module 'br_netfilter'
Sep 16 05:02:23.257825 kernel: Bridge firewalling registered
Sep 16 05:02:23.259638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:02:23.262003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 05:02:23.275373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:02:23.284422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 05:02:23.291204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:02:23.296558 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 05:02:23.301101 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 05:02:23.342660 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 05:02:23.378368 systemd-resolved[245]: Positive Trust Anchors:
Sep 16 05:02:23.378388 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 05:02:23.378461 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 05:02:23.386779 systemd-resolved[245]: Defaulting to hostname 'linux'.
Sep 16 05:02:23.390743 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 05:02:23.398045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:02:23.465844 kernel: SCSI subsystem initialized
Sep 16 05:02:23.478836 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 05:02:23.490857 kernel: iscsi: registered transport (tcp)
Sep 16 05:02:23.516097 kernel: iscsi: registered transport (qla4xxx)
Sep 16 05:02:23.516180 kernel: QLogic iSCSI HBA Driver
Sep 16 05:02:23.540257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 05:02:23.565648 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:02:23.572853 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 05:02:23.634478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 05:02:23.640160 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 05:02:23.697849 kernel: raid6: avx2x4 gen() 17619 MB/s
Sep 16 05:02:23.714841 kernel: raid6: avx2x2 gen() 17912 MB/s
Sep 16 05:02:23.732308 kernel: raid6: avx2x1 gen() 13612 MB/s
Sep 16 05:02:23.732366 kernel: raid6: using algorithm avx2x2 gen() 17912 MB/s
Sep 16 05:02:23.750274 kernel: raid6: .... xor() 18562 MB/s, rmw enabled
Sep 16 05:02:23.750340 kernel: raid6: using avx2x2 recovery algorithm
Sep 16 05:02:23.772850 kernel: xor: automatically using best checksumming function avx
Sep 16 05:02:23.957854 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 05:02:23.966348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 05:02:23.968826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:02:24.002312 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 16 05:02:24.011454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:02:24.013996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 05:02:24.041735 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Sep 16 05:02:24.075307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 05:02:24.081613 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 05:02:24.173075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:02:24.180377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 05:02:24.301195 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Sep 16 05:02:24.312176 kernel: scsi host0: Virtio SCSI HBA
Sep 16 05:02:24.316825 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 16 05:02:24.331829 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 05:02:24.348834 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 16 05:02:24.366773 kernel: AES CTR mode by8 optimization enabled
Sep 16 05:02:24.438008 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 16 05:02:24.444067 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 16 05:02:24.444370 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 16 05:02:24.444611 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 16 05:02:24.446050 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 16 05:02:24.437481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 05:02:24.437712 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:02:24.446753 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 05:02:24.452766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 05:02:24.460825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 05:02:24.460893 kernel: GPT:17805311 != 25165823
Sep 16 05:02:24.460918 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 05:02:24.460950 kernel: GPT:17805311 != 25165823
Sep 16 05:02:24.460971 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 05:02:24.460994 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 16 05:02:24.462484 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:02:24.465717 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 16 05:02:24.508240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:02:24.570634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 16 05:02:24.571549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 05:02:24.599355 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 16 05:02:24.612340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 16 05:02:24.622988 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 16 05:02:24.623271 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 16 05:02:24.632029 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 05:02:24.636962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:02:24.641974 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 05:02:24.647314 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 05:02:24.654044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 05:02:24.672113 disk-uuid[606]: Primary Header is updated.
Sep 16 05:02:24.672113 disk-uuid[606]: Secondary Entries is updated.
Sep 16 05:02:24.672113 disk-uuid[606]: Secondary Header is updated.
Sep 16 05:02:24.678376 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 05:02:24.686829 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 16 05:02:24.711834 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 16 05:02:25.726873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 16 05:02:25.726961 disk-uuid[611]: The operation has completed successfully.
Sep 16 05:02:25.811980 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 05:02:25.812153 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 05:02:25.856672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 05:02:25.874362 sh[628]: Success
Sep 16 05:02:25.897163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 05:02:25.897275 kernel: device-mapper: uevent: version 1.0.3
Sep 16 05:02:25.898564 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 05:02:25.910865 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 16 05:02:25.995357 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 05:02:26.000139 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 05:02:26.023455 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 05:02:26.042871 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (640)
Sep 16 05:02:26.046166 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 05:02:26.046236 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 05:02:26.064927 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 05:02:26.065031 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 05:02:26.065058 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 05:02:26.070387 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 05:02:26.071977 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 05:02:26.075349 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 05:02:26.077572 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 05:02:26.086699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 05:02:26.129858 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (673)
Sep 16 05:02:26.133990 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 05:02:26.134055 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 05:02:26.143265 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 16 05:02:26.143358 kernel: BTRFS info (device sda6): turning on async discard
Sep 16 05:02:26.143383 kernel: BTRFS info (device sda6): enabling free space tree
Sep 16 05:02:26.151879 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 05:02:26.153035 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 05:02:26.160029 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 05:02:26.264087 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 05:02:26.269987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 05:02:26.365600 systemd-networkd[809]: lo: Link UP
Sep 16 05:02:26.370894 systemd-networkd[809]: lo: Gained carrier
Sep 16 05:02:26.384830 systemd-networkd[809]: Enumeration completed
Sep 16 05:02:26.386074 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:02:26.386082 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 05:02:26.399583 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 05:02:26.400134 systemd[1]: Reached target network.target - Network.
Sep 16 05:02:26.402658 systemd-networkd[809]: eth0: Link UP
Sep 16 05:02:26.403236 systemd-networkd[809]: eth0: Gained carrier
Sep 16 05:02:26.403258 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:02:26.419271 systemd-networkd[809]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f'
Sep 16 05:02:26.419424 systemd-networkd[809]: eth0: DHCPv4 address 10.128.0.94/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 16 05:02:26.439419 ignition[732]: Ignition 2.22.0
Sep 16 05:02:26.439780 ignition[732]: Stage: fetch-offline
Sep 16 05:02:26.439866 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:26.443469 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 05:02:26.439882 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:26.448945 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 05:02:26.440034 ignition[732]: parsed url from cmdline: ""
Sep 16 05:02:26.440040 ignition[732]: no config URL provided
Sep 16 05:02:26.440050 ignition[732]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 05:02:26.440063 ignition[732]: no config at "/usr/lib/ignition/user.ign"
Sep 16 05:02:26.440075 ignition[732]: failed to fetch config: resource requires networking
Sep 16 05:02:26.440385 ignition[732]: Ignition finished successfully
Sep 16 05:02:26.493433 ignition[819]: Ignition 2.22.0
Sep 16 05:02:26.493451 ignition[819]: Stage: fetch
Sep 16 05:02:26.493668 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:26.493686 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:26.493865 ignition[819]: parsed url from cmdline: ""
Sep 16 05:02:26.506757 unknown[819]: fetched base config from "system"
Sep 16 05:02:26.493873 ignition[819]: no config URL provided
Sep 16 05:02:26.506765 unknown[819]: fetched base config from "system"
Sep 16 05:02:26.493883 ignition[819]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 05:02:26.506772 unknown[819]: fetched user config from "gcp"
Sep 16 05:02:26.493896 ignition[819]: no config at "/usr/lib/ignition/user.ign"
Sep 16 05:02:26.510905 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 05:02:26.493937 ignition[819]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 16 05:02:26.513716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 05:02:26.497596 ignition[819]: GET result: OK
Sep 16 05:02:26.497844 ignition[819]: parsing config with SHA512: fb0df7e412396431125e03f7eb98dc12e9878a02381835fcd2a0889b99662f3a4549b80b63d350977e84d1b4a3568c918254f54a25b92ba3fbc1b7aeea576f00
Sep 16 05:02:26.508018 ignition[819]: fetch: fetch complete
Sep 16 05:02:26.508025 ignition[819]: fetch: fetch passed
Sep 16 05:02:26.508101 ignition[819]: Ignition finished successfully
Sep 16 05:02:26.558865 ignition[826]: Ignition 2.22.0
Sep 16 05:02:26.558882 ignition[826]: Stage: kargs
Sep 16 05:02:26.562409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 05:02:26.559122 ignition[826]: no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:26.567502 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 05:02:26.559140 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:26.560241 ignition[826]: kargs: kargs passed
Sep 16 05:02:26.560300 ignition[826]: Ignition finished successfully
Sep 16 05:02:26.610653 ignition[833]: Ignition 2.22.0
Sep 16 05:02:26.610673 ignition[833]: Stage: disks
Sep 16 05:02:26.614100 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 05:02:26.610923 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:26.617569 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 05:02:26.610940 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:26.619162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 05:02:26.612159 ignition[833]: disks: disks passed
Sep 16 05:02:26.626977 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 05:02:26.612218 ignition[833]: Ignition finished successfully
Sep 16 05:02:26.630157 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 05:02:26.634210 systemd[1]: Reached target basic.target - Basic System.
Sep 16 05:02:26.640614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 05:02:26.688133 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 16 05:02:26.700920 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 05:02:26.706389 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 05:02:26.887843 kernel: EXT4-fs (sda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 05:02:26.888985 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 05:02:26.892636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 05:02:26.897039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 05:02:26.910874 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 05:02:26.916703 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 16 05:02:26.916793 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 05:02:26.916866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 05:02:26.927838 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (851)
Sep 16 05:02:26.927892 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 05:02:26.930382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 05:02:26.936202 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 16 05:02:26.936253 kernel: BTRFS info (device sda6): turning on async discard
Sep 16 05:02:26.936288 kernel: BTRFS info (device sda6): enabling free space tree
Sep 16 05:02:26.935905 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 05:02:26.942361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 05:02:26.945327 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 05:02:27.064939 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 05:02:27.075881 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
Sep 16 05:02:27.082466 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 05:02:27.089536 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 05:02:27.237274 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 05:02:27.240389 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 05:02:27.250145 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 05:02:27.267148 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 05:02:27.269982 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 05:02:27.301795 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 05:02:27.316204 ignition[963]: INFO : Ignition 2.22.0
Sep 16 05:02:27.319963 ignition[963]: INFO : Stage: mount
Sep 16 05:02:27.319963 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:27.319963 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:27.319963 ignition[963]: INFO : mount: mount passed
Sep 16 05:02:27.319963 ignition[963]: INFO : Ignition finished successfully
Sep 16 05:02:27.320630 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 05:02:27.328447 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 05:02:27.357852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 05:02:27.384851 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (975)
Sep 16 05:02:27.387827 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 05:02:27.387889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 05:02:27.394550 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 16 05:02:27.394611 kernel: BTRFS info (device sda6): turning on async discard
Sep 16 05:02:27.394628 kernel: BTRFS info (device sda6): enabling free space tree
Sep 16 05:02:27.397253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 05:02:27.434876 ignition[992]: INFO : Ignition 2.22.0
Sep 16 05:02:27.434876 ignition[992]: INFO : Stage: files
Sep 16 05:02:27.441944 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:27.441944 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:27.441944 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 05:02:27.441944 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 05:02:27.441944 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 05:02:27.456890 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 05:02:27.456890 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 05:02:27.456890 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 05:02:27.456890 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 05:02:27.456890 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 16 05:02:27.443551 unknown[992]: wrote ssh authorized keys file for user: core
Sep 16 05:02:27.927073 systemd-networkd[809]: eth0: Gained IPv6LL
Sep 16 05:02:34.704250 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 05:02:35.110638 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 05:02:35.115059 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 05:02:35.144964 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 16 05:02:35.501729 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 05:02:35.916500 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 05:02:35.916500 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:02:35.925992 ignition[992]: INFO : files: files passed
Sep 16 05:02:35.925992 ignition[992]: INFO : Ignition finished successfully
Sep 16 05:02:35.925562 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 05:02:35.928205 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 05:02:35.938505 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 05:02:35.959554 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 05:02:35.959712 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 05:02:35.977212 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:02:35.977212 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:02:35.977105 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:02:35.992955 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:02:35.981520 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 05:02:35.988187 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 05:02:36.063494 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 05:02:36.063663 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 05:02:36.068781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 05:02:36.074063 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 05:02:36.078106 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 05:02:36.079430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 05:02:36.114137 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:02:36.117062 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 05:02:36.146125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:02:36.149075 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:02:36.155176 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 05:02:36.158292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 05:02:36.158534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:02:36.169028 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 05:02:36.169538 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 05:02:36.173514 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 05:02:36.177384 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 05:02:36.181376 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 05:02:36.185350 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 05:02:36.189463 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 05:02:36.193613 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 05:02:36.197457 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 05:02:36.200640 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 05:02:36.204392 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 05:02:36.208354 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 05:02:36.208795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 05:02:36.216290 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:02:36.219537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:02:36.223427 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 05:02:36.223701 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:02:36.228336 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 05:02:36.228619 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 05:02:36.235437 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 05:02:36.235884 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:02:36.238410 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 05:02:36.239225 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 05:02:36.244025 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 05:02:36.254923 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 05:02:36.255165 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:02:36.265192 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 05:02:36.268126 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 05:02:36.269007 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:02:36.275264 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 05:02:36.275492 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 05:02:36.293654 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 05:02:36.294144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 05:02:36.307393 ignition[1046]: INFO : Ignition 2.22.0
Sep 16 05:02:36.307393 ignition[1046]: INFO : Stage: umount
Sep 16 05:02:36.312367 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 05:02:36.312367 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 16 05:02:36.312367 ignition[1046]: INFO : umount: umount passed
Sep 16 05:02:36.312367 ignition[1046]: INFO : Ignition finished successfully
Sep 16 05:02:36.312101 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 05:02:36.312493 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 05:02:36.319758 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 05:02:36.321163 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 05:02:36.321335 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 05:02:36.326063 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 05:02:36.326149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 05:02:36.332036 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 05:02:36.332120 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 05:02:36.338043 systemd[1]: Stopped target network.target - Network.
Sep 16 05:02:36.342029 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 05:02:36.342135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 05:02:36.348000 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 05:02:36.351910 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 05:02:36.352009 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:02:36.355938 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 05:02:36.356033 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 05:02:36.356195 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 05:02:36.356272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 05:02:36.364022 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 05:02:36.364100 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 05:02:36.369994 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 05:02:36.370104 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 05:02:36.376020 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 05:02:36.376107 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 05:02:36.382353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 05:02:36.388071 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 05:02:36.391447 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 05:02:36.391633 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 05:02:36.401729 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 05:02:36.402070 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 05:02:36.402210 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 05:02:36.409527 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 05:02:36.409957 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 05:02:36.410139 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 05:02:36.416053 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 05:02:36.417251 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 05:02:36.417321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:02:36.421136 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 05:02:36.421203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 05:02:36.430161 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 05:02:36.440976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 05:02:36.441214 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 05:02:36.444270 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 05:02:36.444349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:02:36.451237 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 05:02:36.451319 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:02:36.456993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 05:02:36.457086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:02:36.461295 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:02:36.467955 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 05:02:36.468063 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:02:36.472290 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 05:02:36.472547 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:02:36.479722 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 05:02:36.479870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:02:36.486154 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 05:02:36.486218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:02:36.493088 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 05:02:36.493165 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 05:02:36.500092 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 05:02:36.500177 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 05:02:36.509141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 05:02:36.509217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 05:02:36.520347 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 05:02:36.526054 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 05:02:36.526150 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:02:36.537327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 05:02:36.537426 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:02:36.549554 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 05:02:36.549648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:02:36.557185 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 05:02:36.557260 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 05:02:36.557306 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:02:36.557783 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 05:02:36.558014 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 05:02:36.564488 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 05:02:36.647951 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Sep 16 05:02:36.564632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 05:02:36.571008 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 05:02:36.575086 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 05:02:36.604627 systemd[1]: Switching root.
Sep 16 05:02:36.660025 systemd-journald[205]: Journal stopped
Sep 16 05:02:39.345189 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 05:02:39.345247 kernel: SELinux: policy capability open_perms=1
Sep 16 05:02:39.345269 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 05:02:39.345286 kernel: SELinux: policy capability always_check_network=0
Sep 16 05:02:39.345303 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 05:02:39.345321 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 05:02:39.345345 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 05:02:39.345363 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 05:02:39.345381 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 05:02:39.345400 kernel: audit: type=1403 audit(1757998957.257:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 05:02:39.345422 systemd[1]: Successfully loaded SELinux policy in 68.989ms.
Sep 16 05:02:39.345444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.104ms.
Sep 16 05:02:39.345466 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 05:02:39.345491 systemd[1]: Detected virtualization google.
Sep 16 05:02:39.345513 systemd[1]: Detected architecture x86-64.
Sep 16 05:02:39.345534 systemd[1]: Detected first boot.
Sep 16 05:02:39.345556 systemd[1]: Initializing machine ID from random generator.
Sep 16 05:02:39.345577 zram_generator::config[1090]: No configuration found.
Sep 16 05:02:39.345603 kernel: Guest personality initialized and is inactive
Sep 16 05:02:39.345622 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 05:02:39.345640 kernel: Initialized host personality
Sep 16 05:02:39.345659 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 05:02:39.345707 systemd[1]: Populated /etc with preset unit settings.
Sep 16 05:02:39.345731 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 05:02:39.345752 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 05:02:39.345779 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 05:02:39.345826 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 05:02:39.345849 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 05:02:39.345870 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 05:02:39.345892 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 05:02:39.345913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 05:02:39.345934 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 05:02:39.345961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 05:02:39.345982 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 05:02:39.346011 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 05:02:39.346032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:02:39.346054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:02:39.346076 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 05:02:39.346097 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 05:02:39.346117 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 05:02:39.346146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 05:02:39.346172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 05:02:39.346195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:02:39.346217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:02:39.346239 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 05:02:39.346260 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 05:02:39.346282 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 05:02:39.346306 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 05:02:39.346334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:02:39.346356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 05:02:39.346378 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 05:02:39.346399 systemd[1]: Reached target swap.target - Swaps.
Sep 16 05:02:39.346421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 05:02:39.346443 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 05:02:39.346465 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 05:02:39.346492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:02:39.346515 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:02:39.346537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:02:39.346560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 05:02:39.346582 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 05:02:39.346604 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 05:02:39.346631 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 05:02:39.346655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:02:39.346677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 05:02:39.346701 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 05:02:39.346722 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 05:02:39.346745 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 05:02:39.346769 systemd[1]: Reached target machines.target - Containers.
Sep 16 05:02:39.346792 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 05:02:39.347889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:02:39.347919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 05:02:39.347943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 05:02:39.347967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 05:02:39.347989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 05:02:39.348021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 05:02:39.348044 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 05:02:39.348065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 05:02:39.348088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 05:02:39.348116 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 05:02:39.348139 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 05:02:39.348160 kernel: loop: module loaded
Sep 16 05:02:39.348181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 05:02:39.348204 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 05:02:39.348224 kernel: fuse: init (API version 7.41)
Sep 16 05:02:39.348245 kernel: ACPI: bus type drm_connector registered
Sep 16 05:02:39.348267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:02:39.348296 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 05:02:39.348319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 05:02:39.348343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 05:02:39.348366 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 05:02:39.348434 systemd-journald[1178]: Collecting audit messages is disabled.
Sep 16 05:02:39.348485 systemd-journald[1178]: Journal started
Sep 16 05:02:39.348529 systemd-journald[1178]: Runtime Journal (/run/log/journal/9f6fb103924446728c0ee1626241e3b3) is 8M, max 148.9M, 140.9M free.
Sep 16 05:02:38.149887 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 05:02:38.170606 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 16 05:02:38.171248 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 05:02:39.361966 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 05:02:39.371865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 05:02:39.388827 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 05:02:39.395839 systemd[1]: Stopped verity-setup.service.
Sep 16 05:02:39.424882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:02:39.436826 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 05:02:39.447545 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 05:02:39.457209 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 05:02:39.466196 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 05:02:39.475164 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 05:02:39.484162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 05:02:39.495150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 05:02:39.504479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 05:02:39.515427 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:02:39.526307 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 05:02:39.526579 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 05:02:39.537388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 05:02:39.537713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 05:02:39.548374 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 05:02:39.548653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 05:02:39.558288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 05:02:39.558555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 05:02:39.570308 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 05:02:39.570578 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 05:02:39.581329 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 05:02:39.581603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 05:02:39.591327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:02:39.601311 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:02:39.612308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 05:02:39.623332 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 05:02:39.634360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:02:39.658319 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 05:02:39.670351 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 05:02:39.686928 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 05:02:39.696006 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 05:02:39.696221 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 05:02:39.706357 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 05:02:39.718350 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 05:02:39.727243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:02:39.738127 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 05:02:39.750025 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 05:02:39.760238 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 05:02:39.768160 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 05:02:39.776986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 05:02:39.781463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 05:02:39.795579 systemd-journald[1178]: Time spent on flushing to /var/log/journal/9f6fb103924446728c0ee1626241e3b3 is 157.439ms for 958 entries.
Sep 16 05:02:39.795579 systemd-journald[1178]: System Journal (/var/log/journal/9f6fb103924446728c0ee1626241e3b3) is 8M, max 584.8M, 576.8M free.
Sep 16 05:02:40.009414 systemd-journald[1178]: Received client request to flush runtime journal.
Sep 16 05:02:40.009509 kernel: loop0: detected capacity change from 0 to 128016
Sep 16 05:02:40.009554 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 05:02:39.805815 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 05:02:39.817267 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 05:02:39.829449 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 05:02:39.840307 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 05:02:39.870832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 05:02:39.881340 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 05:02:39.895058 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 05:02:39.934661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:02:39.985584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 05:02:39.987937 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 05:02:40.013975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 05:02:40.026069 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 05:02:40.045861 kernel: loop1: detected capacity change from 0 to 50736
Sep 16 05:02:40.045447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 05:02:40.103326 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 16 05:02:40.103931 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 16 05:02:40.114469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:02:40.144994 kernel: loop2: detected capacity change from 0 to 110984
Sep 16 05:02:40.233861 kernel: loop3: detected capacity change from 0 to 229808
Sep 16 05:02:40.328296 kernel: loop4: detected capacity change from 0 to 128016
Sep 16 05:02:40.369901 kernel: loop5: detected capacity change from 0 to 50736
Sep 16 05:02:40.419034 kernel: loop6: detected capacity change from 0 to 110984
Sep 16 05:02:40.478016 kernel: loop7: detected capacity change from 0 to 229808
Sep 16 05:02:40.516216 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 16 05:02:40.523766 (sd-merge)[1235]: Merged extensions into '/usr'.
Sep 16 05:02:40.543888 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 05:02:40.544316 systemd[1]: Reloading...
Sep 16 05:02:40.691870 zram_generator::config[1257]: No configuration found.
Sep 16 05:02:40.896613 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 05:02:41.146940 systemd[1]: Reloading finished in 601 ms.
Sep 16 05:02:41.168677 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 05:02:41.178621 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 05:02:41.201016 systemd[1]: Starting ensure-sysext.service...
Sep 16 05:02:41.210334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 05:02:41.238058 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 05:02:41.254780 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 05:02:41.255311 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 05:02:41.255918 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 05:02:41.256584 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 05:02:41.258551 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 05:02:41.259320 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 16 05:02:41.259597 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 16 05:02:41.260166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:02:41.267990 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 05:02:41.268116 systemd-tmpfiles[1302]: Skipping /boot
Sep 16 05:02:41.270252 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)...
Sep 16 05:02:41.270280 systemd[1]: Reloading...
Sep 16 05:02:41.284320 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 05:02:41.284340 systemd-tmpfiles[1302]: Skipping /boot
Sep 16 05:02:41.339307 systemd-udevd[1305]: Using default interface naming scheme 'v255'.
Sep 16 05:02:41.414842 zram_generator::config[1329]: No configuration found.
Sep 16 05:02:41.904130 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 16 05:02:41.931850 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 16 05:02:41.957831 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 05:02:42.004107 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 05:02:42.004558 systemd[1]: Reloading finished in 733 ms.
Sep 16 05:02:42.018182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:02:42.032864 kernel: ACPI: button: Power Button [PWRF]
Sep 16 05:02:42.053434 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:02:42.096254 kernel: EDAC MC: Ver: 3.0.0
Sep 16 05:02:42.103891 systemd[1]: Finished ensure-sysext.service.
Sep 16 05:02:42.131837 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 16 05:02:42.161463 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 16 05:02:42.170016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:02:42.174100 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 05:02:42.188415 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 05:02:42.199226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:02:42.203541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 05:02:42.219908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 05:02:42.239979 kernel: ACPI: button: Sleep Button [SLPF]
Sep 16 05:02:42.234161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 05:02:42.249320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 05:02:42.262145 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 16 05:02:42.270165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:02:42.270435 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:02:42.274122 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 05:02:42.290237 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 05:02:42.305712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 05:02:42.315959 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 05:02:42.329562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 05:02:42.340033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:02:42.343004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 05:02:42.343872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 05:02:42.354683 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 05:02:42.355778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 05:02:42.365378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 05:02:42.365664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 05:02:42.376995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 05:02:42.377900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 05:02:42.387608 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 05:02:42.416209 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 05:02:42.428199 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 16 05:02:42.439259 augenrules[1459]: No rules
Sep 16 05:02:42.442497 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 05:02:42.443939 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 05:02:42.483137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 16 05:02:42.499074 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 05:02:42.519207 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 16 05:02:42.535485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 05:02:42.545985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 05:02:42.546172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 05:02:42.552867 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 05:02:42.568222 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 05:02:42.571443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 05:02:42.585965 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 05:02:42.589061 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 05:02:42.600671 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 16 05:02:42.611538 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 05:02:42.652327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 05:02:42.754028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:02:42.785879 systemd-networkd[1439]: lo: Link UP
Sep 16 05:02:42.785895 systemd-networkd[1439]: lo: Gained carrier
Sep 16 05:02:42.789484 systemd-networkd[1439]: Enumeration completed
Sep 16 05:02:42.789679 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 05:02:42.790834 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:02:42.790993 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 05:02:42.791879 systemd-networkd[1439]: eth0: Link UP
Sep 16 05:02:42.792297 systemd-networkd[1439]: eth0: Gained carrier
Sep 16 05:02:42.792431 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:02:42.796305 systemd-resolved[1440]: Positive Trust Anchors:
Sep 16 05:02:42.796324 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 05:02:42.796381 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 05:02:42.803022 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 05:02:42.803312 systemd-networkd[1439]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f'
Sep 16 05:02:42.803329 systemd-networkd[1439]: eth0: DHCPv4 address 10.128.0.94/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 16 05:02:42.804542 systemd-resolved[1440]: Defaulting to hostname 'linux'.
Sep 16 05:02:42.814904 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 05:02:42.825074 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 05:02:42.834185 systemd[1]: Reached target network.target - Network.
Sep 16 05:02:42.841955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:02:42.851972 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 05:02:42.861106 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 05:02:42.871051 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 05:02:42.880949 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 05:02:42.891174 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 05:02:42.900166 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 05:02:42.909973 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 05:02:42.919970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 05:02:42.920032 systemd[1]: Reached target paths.target - Path Units.
Sep 16 05:02:42.927959 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 05:02:42.938762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 05:02:42.949631 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 05:02:42.959177 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 05:02:42.969169 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 05:02:42.979981 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 05:02:43.000715 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 05:02:43.010423 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 05:02:43.022318 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 05:02:43.033233 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 05:02:43.043753 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 05:02:43.052997 systemd[1]: Reached target basic.target - Basic System.
Sep 16 05:02:43.061099 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 05:02:43.061160 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 05:02:43.062926 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 05:02:43.084469 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 05:02:43.097279 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 05:02:43.129983 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 05:02:43.141768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 05:02:43.157455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 05:02:43.165980 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 05:02:43.167528 jq[1513]: false
Sep 16 05:02:43.169086 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 05:02:43.174939 coreos-metadata[1508]: Sep 16 05:02:43.174 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 16 05:02:43.176252 coreos-metadata[1508]: Sep 16 05:02:43.176 INFO Fetch successful
Sep 16 05:02:43.176355 coreos-metadata[1508]: Sep 16 05:02:43.176 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 16 05:02:43.176973 coreos-metadata[1508]: Sep 16 05:02:43.176 INFO Fetch successful
Sep 16 05:02:43.176973 coreos-metadata[1508]: Sep 16 05:02:43.176 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 16 05:02:43.177891 coreos-metadata[1508]: Sep 16 05:02:43.177 INFO Fetch successful
Sep 16 05:02:43.178073 coreos-metadata[1508]: Sep 16 05:02:43.178 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 16 05:02:43.178359 coreos-metadata[1508]: Sep 16 05:02:43.178 INFO Fetch successful
Sep 16 05:02:43.181540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 05:02:43.193793 systemd[1]: Started ntpd.service - Network Time Service.
Sep 16 05:02:43.198272 extend-filesystems[1514]: Found /dev/sda6
Sep 16 05:02:43.206775 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting
Sep 16 05:02:43.211414 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 05:02:43.201450 oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Sep 16 05:02:43.205950 oslogin_cache_refresh[1515]: Failure getting users, quitting
Sep 16 05:02:43.205975 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 05:02:43.206038 oslogin_cache_refresh[1515]: Refreshing group entry cache
Sep 16 05:02:43.209988 oslogin_cache_refresh[1515]: Failure getting groups, quitting
Sep 16 05:02:43.210007 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 05:02:43.213663 extend-filesystems[1514]: Found /dev/sda9
Sep 16 05:02:43.229097 extend-filesystems[1514]: Checking size of /dev/sda9
Sep 16 05:02:43.216516 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 05:02:43.247168 extend-filesystems[1514]: Resized partition /dev/sda9
Sep 16 05:02:43.268919 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 16 05:02:43.242151 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 05:02:43.269288 extend-filesystems[1536]: resize2fs 1.47.3 (8-Jul-2025)
Sep 16 05:02:43.293012 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 16 05:02:43.257137 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 05:02:43.281407 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 16 05:02:43.282859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 05:02:43.284165 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 05:02:43.296416 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 05:02:43.306075 extend-filesystems[1536]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 16 05:02:43.306075 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 16 05:02:43.306075 extend-filesystems[1536]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 16 05:02:43.384142 kernel: ntpd[1520]: segfault at 24 ip 00005573daf54aeb sp 00007fff11ec4e20 error 4 in ntpd[68aeb,5573daef2000+80000] likely on CPU 0 (core 0, socket 0)
Sep 16 05:02:43.384228 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Sep 16 05:02:43.385738 extend-filesystems[1514]: Resized filesystem in /dev/sda9
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: ----------------------------------------------------
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: corporation. Support and training for ntp-4 are
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: available at https://www.nwtime.org/support
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: ----------------------------------------------------
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: proto: precision = 0.074 usec (-24)
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: basedate set to 2025-09-04
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Listen normally on 3 eth0 10.128.0.94:123
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: Listen normally on 4 lo [::1]:123
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: bind(21) AF_INET6 [fe80::4001:aff:fe80:5e%2]:123 flags 0x811 failed: Cannot assign requested address
Sep 16 05:02:43.393148 ntpd[1520]: 16 Sep 05:02:43 ntpd[1520]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:5e%2]:123
Sep 16 05:02:43.317147 ntpd[1520]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:02:43.324647 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 05:02:43.396004 update_engine[1540]: I20250916 05:02:43.380198 1540 main.cc:92] Flatcar Update Engine starting
Sep 16 05:02:43.317232 ntpd[1520]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:02:43.373897 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 05:02:43.317247 ntpd[1520]: ----------------------------------------------------
Sep 16 05:02:43.374222 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 05:02:43.317261 ntpd[1520]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:02:43.374726 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 05:02:43.317274 ntpd[1520]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:02:43.375137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 05:02:43.317286 ntpd[1520]: corporation. Support and training for ntp-4 are
Sep 16 05:02:43.317301 ntpd[1520]: available at https://www.nwtime.org/support
Sep 16 05:02:43.317313 ntpd[1520]: ----------------------------------------------------
Sep 16 05:02:43.332385 ntpd[1520]: proto: precision = 0.074 usec (-24)
Sep 16 05:02:43.335645 ntpd[1520]: basedate set to 2025-09-04
Sep 16 05:02:43.335673 ntpd[1520]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:02:43.336414 ntpd[1520]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:02:43.336482 ntpd[1520]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:02:43.338973 ntpd[1520]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:02:43.339022 ntpd[1520]: Listen normally on 3 eth0 10.128.0.94:123
Sep 16 05:02:43.339070 ntpd[1520]: Listen normally on 4 lo [::1]:123
Sep 16 05:02:43.339116 ntpd[1520]: bind(21) AF_INET6 [fe80::4001:aff:fe80:5e%2]:123 flags 0x811 failed: Cannot assign requested address
Sep 16 05:02:43.339147 ntpd[1520]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:5e%2]:123
Sep 16 05:02:43.400437 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 05:02:43.401869 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 05:02:43.411560 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 05:02:43.412991 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 05:02:43.429401 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 05:02:43.429903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 05:02:43.439851 jq[1542]: true
Sep 16 05:02:43.455928 systemd-coredump[1551]: Process 1520 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Sep 16 05:02:43.500374 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 05:02:43.530390 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 16 05:02:43.547862 jq[1555]: true
Sep 16 05:02:43.570532 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 16 05:02:43.579920 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Sep 16 05:02:43.590161 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 16 05:02:43.593240 systemd[1]: Started systemd-coredump@0-1551-0.service - Process Core Dump (PID 1551/UID 0).
Sep 16 05:02:43.598868 tar[1550]: linux-amd64/LICENSE
Sep 16 05:02:43.598868 tar[1550]: linux-amd64/helm
Sep 16 05:02:43.776294 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 16 05:02:43.776333 systemd-logind[1537]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 16 05:02:43.776362 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 05:02:43.782044 systemd-logind[1537]: New seat seat0.
Sep 16 05:02:43.783660 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 05:02:43.801943 bash[1587]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 05:02:43.802310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 05:02:43.818284 systemd[1]: Starting sshkeys.service...
Sep 16 05:02:43.872569 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 16 05:02:43.878209 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 05:02:43.885563 dbus-daemon[1509]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 16 05:02:43.895130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 05:02:43.895384 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 05:02:43.899328 update_engine[1540]: I20250916 05:02:43.899108 1540 update_check_scheduler.cc:74] Next update check in 4m30s
Sep 16 05:02:43.906103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 05:02:43.906309 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 05:02:43.922825 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 16 05:02:43.928220 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 16 05:02:43.937933 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 16 05:02:43.948442 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 05:02:43.970700 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 05:02:43.985655 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 16 05:02:44.037064 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 05:02:44.092548 coreos-metadata[1595]: Sep 16 05:02:44.092 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 16 05:02:44.098172 coreos-metadata[1595]: Sep 16 05:02:44.098 INFO Fetch failed with 404: resource not found
Sep 16 05:02:44.098172 coreos-metadata[1595]: Sep 16 05:02:44.098 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 16 05:02:44.101159 coreos-metadata[1595]: Sep 16 05:02:44.100 INFO Fetch successful
Sep 16 05:02:44.101159 coreos-metadata[1595]: Sep 16 05:02:44.100 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 16 05:02:44.112871 coreos-metadata[1595]: Sep 16 05:02:44.112 INFO Fetch failed with 404: resource not found
Sep 16 05:02:44.112871 coreos-metadata[1595]: Sep 16 05:02:44.112 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 16 05:02:44.116255 coreos-metadata[1595]: Sep 16 05:02:44.113 INFO Fetch failed with 404: resource not found
Sep 16 05:02:44.116255 coreos-metadata[1595]: Sep 16 05:02:44.113 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 16 05:02:44.116058 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 05:02:44.119135 coreos-metadata[1595]: Sep 16 05:02:44.117 INFO Fetch successful
Sep 16 05:02:44.117959 systemd-coredump[1573]: Process 1520 (ntpd) of user 0 dumped core.
                                               Module libnss_usrfiles.so.2 without build-id.
                                               Module libgcc_s.so.1 without build-id.
                                               Module ld-linux-x86-64.so.2 without build-id.
                                               Module libc.so.6 without build-id.
                                               Module libcrypto.so.3 without build-id.
                                               Module libm.so.6 without build-id.
                                               Module libcap.so.2 without build-id.
                                               Module ntpd without build-id.
                                               Stack trace of thread 1520:
                                               #0  0x00005573daf54aeb n/a (ntpd + 0x68aeb)
                                               #1  0x00005573daefdcdf n/a (ntpd + 0x11cdf)
                                               #2  0x00005573daefe575 n/a (ntpd + 0x12575)
                                               #3  0x00005573daef9d8a n/a (ntpd + 0xdd8a)
                                               #4  0x00005573daefb5d3 n/a (ntpd + 0xf5d3)
                                               #5  0x00005573daf03fd1 n/a (ntpd + 0x17fd1)
                                               #6  0x00005573daef4c2d n/a (ntpd + 0x8c2d)
                                               #7  0x00007fe00588016c n/a (libc.so.6 + 0x2716c)
                                               #8  0x00007fe005880229 __libc_start_main (libc.so.6 + 0x27229)
                                               #9  0x00005573daef4c55 n/a (ntpd + 0x8c55)
                                               ELF object binary architecture: AMD x86-64
Sep 16 05:02:44.125743 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Sep 16 05:02:44.126024 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Sep 16 05:02:44.128628 unknown[1595]: wrote ssh authorized keys file for user: core
Sep 16 05:02:44.136748 systemd[1]: systemd-coredump@0-1551-0.service: Deactivated successfully.
Sep 16 05:02:44.183813 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 05:02:44.197772 systemd[1]: Started sshd@0-10.128.0.94:22-139.178.68.195:47654.service - OpenSSH per-connection server daemon (139.178.68.195:47654).
Sep 16 05:02:44.228188 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Sep 16 05:02:44.239975 systemd[1]: Started ntpd.service - Network Time Service.
Sep 16 05:02:44.261356 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 05:02:44.264499 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 16 05:02:44.274909 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 16 05:02:44.275842 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 16 05:02:44.277520 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1598 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 16 05:02:44.286021 systemd[1]: Finished sshkeys.service.
Sep 16 05:02:44.286816 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 05:02:44.292785 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 05:02:44.293150 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 05:02:44.324199 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 16 05:02:44.336784 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 05:02:44.378023 systemd-networkd[1439]: eth0: Gained IPv6LL
Sep 16 05:02:44.381418 ntpd[1623]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: ----------------------------------------------------
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: corporation. Support and training for ntp-4 are
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: available at https://www.nwtime.org/support
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: ----------------------------------------------------
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: proto: precision = 0.106 usec (-23)
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: basedate set to 2025-09-04
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen normally on 3 eth0 10.128.0.94:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen normally on 4 lo [::1]:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:5e%2]:123
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: Listening on routing socket on fd #22 for interface updates
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:02:44.386325 ntpd[1623]: 16 Sep 05:02:44 ntpd[1623]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:02:44.381498 ntpd[1623]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:02:44.390910 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 05:02:44.381514 ntpd[1623]: ----------------------------------------------------
Sep 16 05:02:44.381526 ntpd[1623]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:02:44.381540 ntpd[1623]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:02:44.381554 ntpd[1623]: corporation. Support and training for ntp-4 are
Sep 16 05:02:44.381567 ntpd[1623]: available at https://www.nwtime.org/support
Sep 16 05:02:44.381579 ntpd[1623]: ----------------------------------------------------
Sep 16 05:02:44.382469 ntpd[1623]: proto: precision = 0.106 usec (-23)
Sep 16 05:02:44.382763 ntpd[1623]: basedate set to 2025-09-04
Sep 16 05:02:44.382782 ntpd[1623]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:02:44.382922 ntpd[1623]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:02:44.382967 ntpd[1623]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:02:44.383218 ntpd[1623]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:02:44.383255 ntpd[1623]: Listen normally on 3 eth0 10.128.0.94:123
Sep 16 05:02:44.383294 ntpd[1623]: Listen normally on 4 lo [::1]:123
Sep 16 05:02:44.383329 ntpd[1623]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:5e%2]:123
Sep 16 05:02:44.383366 ntpd[1623]: Listening on routing socket on fd #22 for interface updates
Sep 16 05:02:44.385045 ntpd[1623]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:02:44.385078 ntpd[1623]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:02:44.401565 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 05:02:44.420235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 05:02:44.436654 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 05:02:44.451864 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 16 05:02:44.461857 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 05:02:44.483457 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 05:02:44.499861 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 05:02:44.506483 containerd[1553]: time="2025-09-16T05:02:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 05:02:44.500505 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 05:02:44.520472 containerd[1553]: time="2025-09-16T05:02:44.520361979Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 05:02:44.538620 init.sh[1641]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 16 05:02:44.538620 init.sh[1641]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 16 05:02:44.542832 init.sh[1641]: + /usr/bin/google_instance_setup Sep 16 05:02:44.608717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 05:02:44.617723 containerd[1553]: time="2025-09-16T05:02:44.616481024Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.374µs" Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619139344Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619200226Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619458154Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619484140Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619524698Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 
05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619616036Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:02:44.619890 containerd[1553]: time="2025-09-16T05:02:44.619646002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.620969727Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621011143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621039228Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621054355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621191265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621486080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621539874Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 16 05:02:44.623771 containerd[1553]: time="2025-09-16T05:02:44.621558763Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 05:02:44.624405 containerd[1553]: time="2025-09-16T05:02:44.624267480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 05:02:44.631026 containerd[1553]: time="2025-09-16T05:02:44.630991086Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 05:02:44.631581 containerd[1553]: time="2025-09-16T05:02:44.631223858Z" level=info msg="metadata content store policy set" policy=shared Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647116362Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647230457Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647258482Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647280958Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647303828Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647322517Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647341557Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 
containerd[1553]: time="2025-09-16T05:02:44.647370729Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647390014Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647409402Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647425737Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647445802Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 05:02:44.647913 containerd[1553]: time="2025-09-16T05:02:44.647857779Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 05:02:44.649537 containerd[1553]: time="2025-09-16T05:02:44.648076598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 05:02:44.649537 containerd[1553]: time="2025-09-16T05:02:44.648116089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.649684431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.649738471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.649759052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: 
time="2025-09-16T05:02:44.649781314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.650131851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.650173837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.650875145Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 05:02:44.651005 containerd[1553]: time="2025-09-16T05:02:44.650930157Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 05:02:44.652675 containerd[1553]: time="2025-09-16T05:02:44.652028833Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 05:02:44.652675 containerd[1553]: time="2025-09-16T05:02:44.652068127Z" level=info msg="Start snapshots syncer" Sep 16 05:02:44.652675 containerd[1553]: time="2025-09-16T05:02:44.652627811Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 05:02:44.661172 containerd[1553]: time="2025-09-16T05:02:44.659104387Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 05:02:44.661172 containerd[1553]: time="2025-09-16T05:02:44.659220481Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 05:02:44.661406 containerd[1553]: time="2025-09-16T05:02:44.660586649Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 05:02:44.661406 containerd[1553]: time="2025-09-16T05:02:44.660777225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 05:02:44.661742 containerd[1553]: time="2025-09-16T05:02:44.661599961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 05:02:44.661742 containerd[1553]: time="2025-09-16T05:02:44.661660067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 05:02:44.661742 containerd[1553]: time="2025-09-16T05:02:44.661680194Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 05:02:44.661742 containerd[1553]: time="2025-09-16T05:02:44.661701140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.662349842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.663855341Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.663926534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.663958996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.664003673Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 05:02:44.664475 containerd[1553]: time="2025-09-16T05:02:44.664411232Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.664871729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665782301Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665845346Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665863548Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665884417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665923969Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665951861Z" level=info msg="runtime interface created" Sep 16 05:02:44.665990 containerd[1553]: time="2025-09-16T05:02:44.665961244Z" level=info msg="created NRI interface" Sep 16 05:02:44.666917 containerd[1553]: time="2025-09-16T05:02:44.666449976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 05:02:44.666917 containerd[1553]: time="2025-09-16T05:02:44.666856099Z" level=info msg="Connect containerd service" Sep 16 05:02:44.670578 containerd[1553]: time="2025-09-16T05:02:44.667907275Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 05:02:44.672292 
containerd[1553]: time="2025-09-16T05:02:44.672256772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:02:44.800794 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 47654 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:44.808577 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:44.835324 polkitd[1633]: Started polkitd version 126 Sep 16 05:02:44.844473 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 05:02:44.857284 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 05:02:44.879703 polkitd[1633]: Loading rules from directory /etc/polkit-1/rules.d Sep 16 05:02:44.883269 polkitd[1633]: Loading rules from directory /run/polkit-1/rules.d Sep 16 05:02:44.883372 polkitd[1633]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:02:44.883961 polkitd[1633]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 16 05:02:44.884010 polkitd[1633]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:02:44.884647 polkitd[1633]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 16 05:02:44.900814 polkitd[1633]: Finished loading, compiling and executing 2 rules Sep 16 05:02:44.901599 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 16 05:02:44.911494 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 16 05:02:44.915468 polkitd[1633]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 16 05:02:44.931111 systemd-logind[1537]: New session 1 of user core. Sep 16 05:02:44.953033 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 05:02:44.970312 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 05:02:45.022722 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 05:02:45.036673 systemd-hostnamed[1598]: Hostname set to (transient) Sep 16 05:02:45.038484 systemd-resolved[1440]: System hostname changed to 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f'. Sep 16 05:02:45.038722 systemd-logind[1537]: New session c1 of user core. Sep 16 05:02:45.109252 tar[1550]: linux-amd64/README.md Sep 16 05:02:45.156540 containerd[1553]: time="2025-09-16T05:02:45.156481427Z" level=info msg="Start subscribing containerd event" Sep 16 05:02:45.157361 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.156783302Z" level=info msg="Start recovering state" Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.156602002Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159184691Z" level=info msg="Start event monitor" Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159209693Z" level=info msg="Start cni network conf syncer for default" Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159222679Z" level=info msg="Start streaming server" Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159237171Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159249047Z" level=info msg="runtime interface starting up..." Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159259366Z" level=info msg="starting plugins..." Sep 16 05:02:45.159615 containerd[1553]: time="2025-09-16T05:02:45.159279335Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 05:02:45.160744 containerd[1553]: time="2025-09-16T05:02:45.160693649Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 05:02:45.164964 containerd[1553]: time="2025-09-16T05:02:45.164933521Z" level=info msg="containerd successfully booted in 0.670734s" Sep 16 05:02:45.169048 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 05:02:45.397893 systemd[1674]: Queued start job for default target default.target. Sep 16 05:02:45.405400 systemd[1674]: Created slice app.slice - User Application Slice. Sep 16 05:02:45.406869 systemd[1674]: Reached target paths.target - Paths. Sep 16 05:02:45.406964 systemd[1674]: Reached target timers.target - Timers. Sep 16 05:02:45.411324 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 05:02:45.441580 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 05:02:45.443066 systemd[1674]: Reached target sockets.target - Sockets. 
Sep 16 05:02:45.443149 systemd[1674]: Reached target basic.target - Basic System. Sep 16 05:02:45.443223 systemd[1674]: Reached target default.target - Main User Target. Sep 16 05:02:45.443276 systemd[1674]: Startup finished in 389ms. Sep 16 05:02:45.443606 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 05:02:45.467153 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 05:02:45.681853 instance-setup[1650]: INFO Running google_set_multiqueue. Sep 16 05:02:45.708924 instance-setup[1650]: INFO Set channels for eth0 to 2. Sep 16 05:02:45.714164 systemd[1]: Started sshd@1-10.128.0.94:22-139.178.68.195:47656.service - OpenSSH per-connection server daemon (139.178.68.195:47656). Sep 16 05:02:45.717487 instance-setup[1650]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 16 05:02:45.720060 instance-setup[1650]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 16 05:02:45.721595 instance-setup[1650]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 16 05:02:45.722918 instance-setup[1650]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 16 05:02:45.723532 instance-setup[1650]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 16 05:02:45.727549 instance-setup[1650]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 16 05:02:45.728235 instance-setup[1650]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Sep 16 05:02:45.730349 instance-setup[1650]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 16 05:02:45.744193 instance-setup[1650]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 05:02:45.750584 instance-setup[1650]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 05:02:45.752948 instance-setup[1650]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 16 05:02:45.753008 instance-setup[1650]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 16 05:02:45.790795 init.sh[1641]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 16 05:02:45.962035 startup-script[1727]: INFO Starting startup scripts. Sep 16 05:02:45.968613 startup-script[1727]: INFO No startup scripts found in metadata. Sep 16 05:02:45.968691 startup-script[1727]: INFO Finished running startup scripts. Sep 16 05:02:45.996258 init.sh[1641]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 16 05:02:45.996258 init.sh[1641]: + daemon_pids=() Sep 16 05:02:45.996258 init.sh[1641]: + for d in accounts clock_skew network Sep 16 05:02:45.998297 init.sh[1641]: + daemon_pids+=($!) Sep 16 05:02:45.998297 init.sh[1641]: + for d in accounts clock_skew network Sep 16 05:02:45.998383 init.sh[1730]: + /usr/bin/google_accounts_daemon Sep 16 05:02:45.998752 init.sh[1731]: + /usr/bin/google_clock_skew_daemon Sep 16 05:02:45.999235 init.sh[1641]: + daemon_pids+=($!) Sep 16 05:02:45.999235 init.sh[1641]: + for d in accounts clock_skew network Sep 16 05:02:45.999235 init.sh[1641]: + daemon_pids+=($!) Sep 16 05:02:45.999235 init.sh[1641]: + NOTIFY_SOCKET=/run/systemd/notify Sep 16 05:02:45.999235 init.sh[1641]: + /usr/bin/systemd-notify --ready Sep 16 05:02:45.999424 init.sh[1732]: + /usr/bin/google_network_daemon Sep 16 05:02:46.017736 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
Sep 16 05:02:46.028326 init.sh[1641]: + wait -n 1730 1731 1732 Sep 16 05:02:46.053867 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 47656 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:46.057231 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:46.071617 systemd-logind[1537]: New session 2 of user core. Sep 16 05:02:46.077010 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 05:02:46.279915 sshd[1734]: Connection closed by 139.178.68.195 port 47656 Sep 16 05:02:46.280382 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:46.295366 systemd[1]: sshd@1-10.128.0.94:22-139.178.68.195:47656.service: Deactivated successfully. Sep 16 05:02:46.302591 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 05:02:46.307292 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit. Sep 16 05:02:46.311924 systemd-logind[1537]: Removed session 2. Sep 16 05:02:46.339290 systemd[1]: Started sshd@2-10.128.0.94:22-139.178.68.195:47668.service - OpenSSH per-connection server daemon (139.178.68.195:47668). Sep 16 05:02:46.490949 google-clock-skew[1731]: INFO Starting Google Clock Skew daemon. Sep 16 05:02:46.503466 google-clock-skew[1731]: INFO Clock drift token has changed: 0. Sep 16 05:02:46.507836 google-networking[1732]: INFO Starting Google Networking daemon. Sep 16 05:02:46.557037 groupadd[1751]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 16 05:02:46.563280 groupadd[1751]: group added to /etc/gshadow: name=google-sudoers Sep 16 05:02:46.616157 groupadd[1751]: new group: name=google-sudoers, GID=1000 Sep 16 05:02:46.646578 google-accounts[1730]: INFO Starting Google Accounts daemon. Sep 16 05:02:46.661083 google-accounts[1730]: WARNING OS Login not installed. Sep 16 05:02:46.662911 google-accounts[1730]: INFO Creating a new user account for 0. 
Sep 16 05:02:46.670302 init.sh[1759]: useradd: invalid user name '0': use --badname to ignore Sep 16 05:02:46.671584 google-accounts[1730]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 16 05:02:46.694025 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 47668 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:46.696770 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:46.709145 systemd-logind[1537]: New session 3 of user core. Sep 16 05:02:46.714078 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 05:02:46.747194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:02:46.757961 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 05:02:46.761515 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:02:46.767219 systemd[1]: Startup finished in 3.878s (kernel) + 14.420s (initrd) + 9.575s (userspace) = 27.874s. Sep 16 05:02:46.909154 sshd[1765]: Connection closed by 139.178.68.195 port 47668 Sep 16 05:02:46.909995 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:46.917742 systemd[1]: sshd@2-10.128.0.94:22-139.178.68.195:47668.service: Deactivated successfully. Sep 16 05:02:46.921046 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 05:02:46.922550 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit. Sep 16 05:02:46.925758 systemd-logind[1537]: Removed session 3. Sep 16 05:02:47.000244 systemd-resolved[1440]: Clock change detected. Flushing caches. Sep 16 05:02:47.002103 google-clock-skew[1731]: INFO Synced system time with hardware clock. 
Sep 16 05:02:47.654922 kubelet[1767]: E0916 05:02:47.654827 1767 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:02:47.658342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:02:47.658664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:02:47.659508 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 266.1M memory peak. Sep 16 05:02:56.952862 systemd[1]: Started sshd@3-10.128.0.94:22-139.178.68.195:47922.service - OpenSSH per-connection server daemon (139.178.68.195:47922). Sep 16 05:02:57.264800 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 47922 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:57.266482 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:57.274139 systemd-logind[1537]: New session 4 of user core. Sep 16 05:02:57.276788 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 05:02:57.482131 sshd[1786]: Connection closed by 139.178.68.195 port 47922 Sep 16 05:02:57.483020 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:57.488867 systemd[1]: sshd@3-10.128.0.94:22-139.178.68.195:47922.service: Deactivated successfully. Sep 16 05:02:57.491349 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 05:02:57.492679 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit. Sep 16 05:02:57.495141 systemd-logind[1537]: Removed session 4. Sep 16 05:02:57.537862 systemd[1]: Started sshd@4-10.128.0.94:22-139.178.68.195:47930.service - OpenSSH per-connection server daemon (139.178.68.195:47930). 
Sep 16 05:02:57.783332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 05:02:57.787829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:02:57.845542 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 47930 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:57.847139 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:57.853631 systemd-logind[1537]: New session 5 of user core. Sep 16 05:02:57.858831 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 05:02:58.054709 sshd[1799]: Connection closed by 139.178.68.195 port 47930 Sep 16 05:02:58.055616 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:58.062246 systemd[1]: sshd@4-10.128.0.94:22-139.178.68.195:47930.service: Deactivated successfully. Sep 16 05:02:58.064861 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 05:02:58.066220 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit. Sep 16 05:02:58.068215 systemd-logind[1537]: Removed session 5. Sep 16 05:02:58.109259 systemd[1]: Started sshd@5-10.128.0.94:22-139.178.68.195:47934.service - OpenSSH per-connection server daemon (139.178.68.195:47934). Sep 16 05:02:58.157864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 05:02:58.174143 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:02:58.222885 kubelet[1813]: E0916 05:02:58.222805 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:02:58.227834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:02:58.228078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:02:58.228758 systemd[1]: kubelet.service: Consumed 206ms CPU time, 110.3M memory peak. Sep 16 05:02:58.418325 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 47934 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:58.419835 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:58.427631 systemd-logind[1537]: New session 6 of user core. Sep 16 05:02:58.434776 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 05:02:58.630943 sshd[1820]: Connection closed by 139.178.68.195 port 47934 Sep 16 05:02:58.631837 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:58.637908 systemd[1]: sshd@5-10.128.0.94:22-139.178.68.195:47934.service: Deactivated successfully. Sep 16 05:02:58.640762 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 05:02:58.642425 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit. Sep 16 05:02:58.644620 systemd-logind[1537]: Removed session 6. Sep 16 05:02:58.683042 systemd[1]: Started sshd@6-10.128.0.94:22-139.178.68.195:47950.service - OpenSSH per-connection server daemon (139.178.68.195:47950). 
Sep 16 05:02:58.988135 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 47950 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:58.989965 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:58.997613 systemd-logind[1537]: New session 7 of user core. Sep 16 05:02:59.006853 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 05:02:59.181902 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 05:02:59.182393 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:59.195905 sudo[1830]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:59.238728 sshd[1829]: Connection closed by 139.178.68.195 port 47950 Sep 16 05:02:59.240209 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:59.246698 systemd[1]: sshd@6-10.128.0.94:22-139.178.68.195:47950.service: Deactivated successfully. Sep 16 05:02:59.249409 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 05:02:59.250662 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Sep 16 05:02:59.253043 systemd-logind[1537]: Removed session 7. Sep 16 05:02:59.296000 systemd[1]: Started sshd@7-10.128.0.94:22-139.178.68.195:47956.service - OpenSSH per-connection server daemon (139.178.68.195:47956). Sep 16 05:02:59.598805 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 47956 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:02:59.600589 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:59.606640 systemd-logind[1537]: New session 8 of user core. Sep 16 05:02:59.617818 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 16 05:02:59.777664 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 05:02:59.778154 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:59.785025 sudo[1841]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:59.798978 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 05:02:59.799471 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:02:59.812868 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:02:59.859794 augenrules[1863]: No rules Sep 16 05:02:59.860800 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:02:59.861123 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:02:59.862410 sudo[1840]: pam_unix(sudo:session): session closed for user root Sep 16 05:02:59.905264 sshd[1839]: Connection closed by 139.178.68.195 port 47956 Sep 16 05:02:59.906142 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:59.911412 systemd[1]: sshd@7-10.128.0.94:22-139.178.68.195:47956.service: Deactivated successfully. Sep 16 05:02:59.913956 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 05:02:59.917089 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit. Sep 16 05:02:59.918499 systemd-logind[1537]: Removed session 8. Sep 16 05:02:59.963107 systemd[1]: Started sshd@8-10.128.0.94:22-139.178.68.195:41492.service - OpenSSH per-connection server daemon (139.178.68.195:41492). 
Sep 16 05:03:00.263437 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 41492 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:03:00.265167 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:00.272513 systemd-logind[1537]: New session 9 of user core. Sep 16 05:03:00.282823 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 05:03:00.440899 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 05:03:00.441398 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:03:00.927514 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 05:03:00.957339 (dockerd)[1894]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 05:03:01.314616 dockerd[1894]: time="2025-09-16T05:03:01.313982692Z" level=info msg="Starting up" Sep 16 05:03:01.317944 dockerd[1894]: time="2025-09-16T05:03:01.317902690Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 05:03:01.333971 dockerd[1894]: time="2025-09-16T05:03:01.333908661Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 05:03:01.390145 dockerd[1894]: time="2025-09-16T05:03:01.390059520Z" level=info msg="Loading containers: start." Sep 16 05:03:01.408590 kernel: Initializing XFRM netlink socket Sep 16 05:03:01.765002 systemd-networkd[1439]: docker0: Link UP Sep 16 05:03:01.771239 dockerd[1894]: time="2025-09-16T05:03:01.771179154Z" level=info msg="Loading containers: done." 
Sep 16 05:03:01.791525 dockerd[1894]: time="2025-09-16T05:03:01.791048370Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 05:03:01.791525 dockerd[1894]: time="2025-09-16T05:03:01.791173835Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 05:03:01.791525 dockerd[1894]: time="2025-09-16T05:03:01.791296196Z" level=info msg="Initializing buildkit" Sep 16 05:03:01.794041 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3659366701-merged.mount: Deactivated successfully. Sep 16 05:03:01.824785 dockerd[1894]: time="2025-09-16T05:03:01.824711538Z" level=info msg="Completed buildkit initialization" Sep 16 05:03:01.834494 dockerd[1894]: time="2025-09-16T05:03:01.834418152Z" level=info msg="Daemon has completed initialization" Sep 16 05:03:01.834674 dockerd[1894]: time="2025-09-16T05:03:01.834491765Z" level=info msg="API listen on /run/docker.sock" Sep 16 05:03:01.834972 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 05:03:02.779789 containerd[1553]: time="2025-09-16T05:03:02.779734883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 05:03:03.252093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105138004.mount: Deactivated successfully. 
Sep 16 05:03:05.047143 containerd[1553]: time="2025-09-16T05:03:05.047045569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.048694 containerd[1553]: time="2025-09-16T05:03:05.048409542Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30122476" Sep 16 05:03:05.049851 containerd[1553]: time="2025-09-16T05:03:05.049807009Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.053185 containerd[1553]: time="2025-09-16T05:03:05.053117031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:05.057165 containerd[1553]: time="2025-09-16T05:03:05.057115518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.277329856s" Sep 16 05:03:05.058578 containerd[1553]: time="2025-09-16T05:03:05.057308152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 16 05:03:05.060165 containerd[1553]: time="2025-09-16T05:03:05.060135925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 05:03:06.648774 containerd[1553]: time="2025-09-16T05:03:06.648703884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:06.650261 containerd[1553]: time="2025-09-16T05:03:06.650216199Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26022778" Sep 16 05:03:06.651606 containerd[1553]: time="2025-09-16T05:03:06.651326940Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:06.654491 containerd[1553]: time="2025-09-16T05:03:06.654427379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:06.655962 containerd[1553]: time="2025-09-16T05:03:06.655800842Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.595510888s" Sep 16 05:03:06.655962 containerd[1553]: time="2025-09-16T05:03:06.655844710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 16 05:03:06.656666 containerd[1553]: time="2025-09-16T05:03:06.656623229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 05:03:08.028450 containerd[1553]: time="2025-09-16T05:03:08.028374107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:08.029939 containerd[1553]: time="2025-09-16T05:03:08.029688520Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20157484" Sep 16 05:03:08.031155 containerd[1553]: time="2025-09-16T05:03:08.031107874Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:08.034347 containerd[1553]: time="2025-09-16T05:03:08.034310232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:08.036130 containerd[1553]: time="2025-09-16T05:03:08.035742065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.379079943s" Sep 16 05:03:08.036130 containerd[1553]: time="2025-09-16T05:03:08.035787227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 16 05:03:08.037057 containerd[1553]: time="2025-09-16T05:03:08.037011839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 05:03:08.302656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 05:03:08.305646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:08.748770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 05:03:08.763091 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:03:08.865170 kubelet[2180]: E0916 05:03:08.865115 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:03:08.869476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:03:08.870408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:03:08.871273 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.4M memory peak. Sep 16 05:03:09.333757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171908674.mount: Deactivated successfully. Sep 16 05:03:10.098220 containerd[1553]: time="2025-09-16T05:03:10.098147724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:10.099900 containerd[1553]: time="2025-09-16T05:03:10.099613765Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31931364" Sep 16 05:03:10.101119 containerd[1553]: time="2025-09-16T05:03:10.101070772Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:10.107897 containerd[1553]: time="2025-09-16T05:03:10.107846557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:10.109266 containerd[1553]: time="2025-09-16T05:03:10.109219649Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.072165338s" Sep 16 05:03:10.109450 containerd[1553]: time="2025-09-16T05:03:10.109423765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 16 05:03:10.110773 containerd[1553]: time="2025-09-16T05:03:10.110709399Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 05:03:10.524086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325186235.mount: Deactivated successfully. Sep 16 05:03:11.789668 containerd[1553]: time="2025-09-16T05:03:11.789593422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:11.791188 containerd[1553]: time="2025-09-16T05:03:11.791139176Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Sep 16 05:03:11.792641 containerd[1553]: time="2025-09-16T05:03:11.792338190Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:11.796533 containerd[1553]: time="2025-09-16T05:03:11.796010521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:11.797429 containerd[1553]: time="2025-09-16T05:03:11.797374602Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.686622957s" Sep 16 05:03:11.797429 containerd[1553]: time="2025-09-16T05:03:11.797426301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 16 05:03:11.798354 containerd[1553]: time="2025-09-16T05:03:11.798321057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 05:03:12.183283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289760537.mount: Deactivated successfully. Sep 16 05:03:12.189729 containerd[1553]: time="2025-09-16T05:03:12.189670196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:12.190741 containerd[1553]: time="2025-09-16T05:03:12.190686419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 16 05:03:12.192568 containerd[1553]: time="2025-09-16T05:03:12.191705413Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:12.196814 containerd[1553]: time="2025-09-16T05:03:12.196774045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:03:12.197828 containerd[1553]: time="2025-09-16T05:03:12.197783413Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 399.417612ms" Sep 16 05:03:12.197932 containerd[1553]: time="2025-09-16T05:03:12.197833016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 05:03:12.198820 containerd[1553]: time="2025-09-16T05:03:12.198766300Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 05:03:12.590899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451631687.mount: Deactivated successfully. Sep 16 05:03:14.859872 containerd[1553]: time="2025-09-16T05:03:14.859795954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:14.861534 containerd[1553]: time="2025-09-16T05:03:14.861379137Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58384071" Sep 16 05:03:14.862714 containerd[1553]: time="2025-09-16T05:03:14.862665745Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:14.866113 containerd[1553]: time="2025-09-16T05:03:14.866041230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:14.867772 containerd[1553]: time="2025-09-16T05:03:14.867497394Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.668693041s" Sep 16 05:03:14.867772 containerd[1553]: time="2025-09-16T05:03:14.867546611Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 16 05:03:15.046544 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 16 05:03:17.895508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:17.895861 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.4M memory peak. Sep 16 05:03:17.899121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:17.945796 systemd[1]: Reload requested from client PID 2334 ('systemctl') (unit session-9.scope)... Sep 16 05:03:17.945818 systemd[1]: Reloading... Sep 16 05:03:18.136806 zram_generator::config[2378]: No configuration found. Sep 16 05:03:18.467922 systemd[1]: Reloading finished in 521 ms. Sep 16 05:03:18.543489 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 05:03:18.543650 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 05:03:18.544034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:18.544112 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.3M memory peak. Sep 16 05:03:18.547072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:19.442962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 05:03:19.455211 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:19.523639 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:19.524182 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:19.524735 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:19.524735 kubelet[2429]: I0916 05:03:19.524405 2429 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:03:19.782989 kubelet[2429]: I0916 05:03:19.782513 2429 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:03:19.782989 kubelet[2429]: I0916 05:03:19.782550 2429 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:03:19.783191 kubelet[2429]: I0916 05:03:19.783113 2429 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:03:19.834169 kubelet[2429]: E0916 05:03:19.834098 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 05:03:19.836918 kubelet[2429]: I0916 05:03:19.836626 2429 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:03:19.853050 kubelet[2429]: I0916 05:03:19.853011 2429 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:03:19.858759 kubelet[2429]: I0916 05:03:19.858701 2429 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 05:03:19.859118 kubelet[2429]: I0916 05:03:19.859057 2429 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:03:19.859598 kubelet[2429]: I0916 05:03:19.859101 2429 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":nul
l,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:03:19.859598 kubelet[2429]: I0916 05:03:19.859361 2429 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:03:19.859598 kubelet[2429]: I0916 05:03:19.859377 2429 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:03:19.863293 kubelet[2429]: I0916 05:03:19.863246 2429 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:19.868602 kubelet[2429]: I0916 05:03:19.868292 2429 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:03:19.868602 kubelet[2429]: I0916 05:03:19.868336 2429 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:03:19.871180 kubelet[2429]: I0916 05:03:19.871068 2429 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:03:19.873467 kubelet[2429]: I0916 05:03:19.873439 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:03:19.882580 kubelet[2429]: E0916 05:03:19.881659 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:03:19.882580 kubelet[2429]: I0916 05:03:19.881851 2429 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:03:19.882758 kubelet[2429]: I0916 05:03:19.882671 2429 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:03:19.884660 kubelet[2429]: W0916 05:03:19.884578 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 05:03:19.900591 kubelet[2429]: E0916 05:03:19.899657 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:03:19.905132 kubelet[2429]: I0916 05:03:19.905089 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:03:19.905263 kubelet[2429]: I0916 05:03:19.905185 2429 server.go:1289] "Started kubelet" Sep 16 05:03:19.908574 kubelet[2429]: I0916 05:03:19.908534 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:03:19.913775 kubelet[2429]: I0916 05:03:19.908861 2429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:03:19.915027 kubelet[2429]: I0916 05:03:19.914948 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:03:19.915757 kubelet[2429]: I0916 05:03:19.915691 2429 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:03:19.918018 kubelet[2429]: I0916 05:03:19.917932 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:03:19.918460 kubelet[2429]: E0916 05:03:19.918425 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" Sep 16 05:03:19.919576 kubelet[2429]: I0916 05:03:19.908928 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:03:19.919867 
kubelet[2429]: I0916 05:03:19.919841 2429 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:03:19.923135 kubelet[2429]: E0916 05:03:19.923095 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="200ms" Sep 16 05:03:19.924281 kubelet[2429]: I0916 05:03:19.924209 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:03:19.924382 kubelet[2429]: I0916 05:03:19.924305 2429 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:03:19.927575 kubelet[2429]: I0916 05:03:19.927476 2429 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:03:19.927575 kubelet[2429]: I0916 05:03:19.927538 2429 factory.go:223] Registration of the systemd container factory successfully Sep 16 05:03:19.927726 kubelet[2429]: E0916 05:03:19.924993 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f.1865aac6b0d6328b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,UID:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,},FirstTimestamp:2025-09-16 05:03:19.905120907 +0000 UTC m=+0.442940094,LastTimestamp:2025-09-16 05:03:19.905120907 +0000 UTC 
m=+0.442940094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,}" Sep 16 05:03:19.928978 kubelet[2429]: I0916 05:03:19.928900 2429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:03:19.930104 kubelet[2429]: E0916 05:03:19.930028 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:03:19.935858 kubelet[2429]: E0916 05:03:19.935827 2429 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:03:19.961473 kubelet[2429]: I0916 05:03:19.961447 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:03:19.961642 kubelet[2429]: I0916 05:03:19.961618 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:03:19.961711 kubelet[2429]: I0916 05:03:19.961646 2429 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:03:19.965810 kubelet[2429]: I0916 05:03:19.965579 2429 policy_none.go:49] "None policy: Start" Sep 16 05:03:19.966612 kubelet[2429]: I0916 05:03:19.965794 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:03:19.966708 kubelet[2429]: I0916 05:03:19.966627 2429 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:03:19.968914 kubelet[2429]: I0916 05:03:19.968747 2429 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 16 05:03:19.971025 kubelet[2429]: I0916 05:03:19.970988 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 05:03:19.971166 kubelet[2429]: I0916 05:03:19.971152 2429 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:03:19.971266 kubelet[2429]: I0916 05:03:19.971254 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 05:03:19.971348 kubelet[2429]: I0916 05:03:19.971336 2429 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:03:19.971480 kubelet[2429]: E0916 05:03:19.971458 2429 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:03:19.977585 kubelet[2429]: E0916 05:03:19.977461 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:03:19.984785 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 05:03:19.995199 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 05:03:20.000792 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 16 05:03:20.008797 kubelet[2429]: E0916 05:03:20.008768 2429 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:03:20.009121 kubelet[2429]: I0916 05:03:20.009105 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:03:20.009243 kubelet[2429]: I0916 05:03:20.009211 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:03:20.010180 kubelet[2429]: I0916 05:03:20.010084 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:03:20.014046 kubelet[2429]: E0916 05:03:20.014013 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 05:03:20.014292 kubelet[2429]: E0916 05:03:20.014090 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" Sep 16 05:03:20.101160 systemd[1]: Created slice kubepods-burstable-pod99d46631b20e62c30d46cd99aaa6fe1d.slice - libcontainer container kubepods-burstable-pod99d46631b20e62c30d46cd99aaa6fe1d.slice. Sep 16 05:03:20.110748 kubelet[2429]: E0916 05:03:20.110713 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.117074 systemd[1]: Created slice kubepods-burstable-podf3f93be7e53ea7715f23258881bef8ca.slice - libcontainer container kubepods-burstable-podf3f93be7e53ea7715f23258881bef8ca.slice. 
Sep 16 05:03:20.120122 kubelet[2429]: I0916 05:03:20.120077 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.120599 kubelet[2429]: E0916 05:03:20.120531 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.123902 kubelet[2429]: E0916 05:03:20.123857 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="400ms" Sep 16 05:03:20.125018 kubelet[2429]: E0916 05:03:20.124979 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.129177 systemd[1]: Created slice kubepods-burstable-pod662ff5d06ad5857e7a88b00484f0cf03.slice - libcontainer container kubepods-burstable-pod662ff5d06ad5857e7a88b00484f0cf03.slice. 
Sep 16 05:03:20.131869 kubelet[2429]: E0916 05:03:20.131817 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226038 kubelet[2429]: I0916 05:03:20.225975 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226249 kubelet[2429]: I0916 05:03:20.226081 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226249 kubelet[2429]: I0916 05:03:20.226125 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226249 kubelet[2429]: I0916 05:03:20.226151 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/662ff5d06ad5857e7a88b00484f0cf03-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"662ff5d06ad5857e7a88b00484f0cf03\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226249 kubelet[2429]: I0916 05:03:20.226182 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226462 kubelet[2429]: I0916 05:03:20.226207 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226462 kubelet[2429]: I0916 05:03:20.226232 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226462 kubelet[2429]: I0916 05:03:20.226262 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-ca-certs\") 
pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.226462 kubelet[2429]: I0916 05:03:20.226307 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.326308 kubelet[2429]: I0916 05:03:20.325887 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.326491 kubelet[2429]: E0916 05:03:20.326351 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.413531 containerd[1553]: time="2025-09-16T05:03:20.413372449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:99d46631b20e62c30d46cd99aaa6fe1d,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:20.428888 containerd[1553]: time="2025-09-16T05:03:20.428287411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:f3f93be7e53ea7715f23258881bef8ca,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:20.433509 containerd[1553]: time="2025-09-16T05:03:20.433446077Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:662ff5d06ad5857e7a88b00484f0cf03,Namespace:kube-system,Attempt:0,}" Sep 16 05:03:20.468591 containerd[1553]: time="2025-09-16T05:03:20.467644652Z" level=info msg="connecting to shim e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4" address="unix:///run/containerd/s/a6bf89168b1d51b9c84e7689e00415255a80c935bd76b4738ec87ee541fc6648" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:20.490800 containerd[1553]: time="2025-09-16T05:03:20.490750426Z" level=info msg="connecting to shim af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c" address="unix:///run/containerd/s/eb3b3c898c0b8deb55d350a74065bdb7b3f0bb24df2ecf467eda95b6148ef336" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:20.518653 containerd[1553]: time="2025-09-16T05:03:20.518599835Z" level=info msg="connecting to shim 72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe" address="unix:///run/containerd/s/304e615dc27a7163b6066d7eace115a32debd9a05643929dcdd776c15b79100f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:03:20.525862 kubelet[2429]: E0916 05:03:20.525805 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f?timeout=10s\": dial tcp 10.128.0.94:6443: connect: connection refused" interval="800ms" Sep 16 05:03:20.554267 systemd[1]: Started cri-containerd-af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c.scope - libcontainer container af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c. Sep 16 05:03:20.562318 systemd[1]: Started cri-containerd-e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4.scope - libcontainer container e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4. 
Sep 16 05:03:20.598849 systemd[1]: Started cri-containerd-72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe.scope - libcontainer container 72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe. Sep 16 05:03:20.700349 containerd[1553]: time="2025-09-16T05:03:20.699620318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:662ff5d06ad5857e7a88b00484f0cf03,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe\"" Sep 16 05:03:20.707217 containerd[1553]: time="2025-09-16T05:03:20.706173009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:99d46631b20e62c30d46cd99aaa6fe1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4\"" Sep 16 05:03:20.707349 kubelet[2429]: E0916 05:03:20.706795 2429 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae44694" Sep 16 05:03:20.709645 kubelet[2429]: E0916 05:03:20.709593 2429 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae44694" Sep 16 05:03:20.711672 containerd[1553]: time="2025-09-16T05:03:20.711631743Z" level=info msg="CreateContainer within sandbox \"72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 05:03:20.714874 containerd[1553]: time="2025-09-16T05:03:20.714827430Z" level=info msg="CreateContainer within sandbox 
\"e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 05:03:20.726583 containerd[1553]: time="2025-09-16T05:03:20.725459002Z" level=info msg="Container fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:20.733326 kubelet[2429]: I0916 05:03:20.732876 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.733326 kubelet[2429]: E0916 05:03:20.733285 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.94:6443/api/v1/nodes\": dial tcp 10.128.0.94:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:20.736478 containerd[1553]: time="2025-09-16T05:03:20.736379820Z" level=info msg="Container cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:20.739285 containerd[1553]: time="2025-09-16T05:03:20.739246851Z" level=info msg="CreateContainer within sandbox \"72a1d70c671a600fa0e3426871bb1657b87361a8413b6276c2db4d51cf8799fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9\"" Sep 16 05:03:20.740500 containerd[1553]: time="2025-09-16T05:03:20.740465660Z" level=info msg="StartContainer for \"fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9\"" Sep 16 05:03:20.740707 containerd[1553]: time="2025-09-16T05:03:20.740485953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,Uid:f3f93be7e53ea7715f23258881bef8ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c\"" Sep 16 05:03:20.742582 kubelet[2429]: 
E0916 05:03:20.742529 2429 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1c" Sep 16 05:03:20.743094 containerd[1553]: time="2025-09-16T05:03:20.743057137Z" level=info msg="connecting to shim fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9" address="unix:///run/containerd/s/304e615dc27a7163b6066d7eace115a32debd9a05643929dcdd776c15b79100f" protocol=ttrpc version=3 Sep 16 05:03:20.747971 containerd[1553]: time="2025-09-16T05:03:20.747933875Z" level=info msg="CreateContainer within sandbox \"af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 05:03:20.751020 containerd[1553]: time="2025-09-16T05:03:20.750938609Z" level=info msg="CreateContainer within sandbox \"e44b8ffed5d807102634db1e73902475919036eb70b1f69e2b93f1720a68d8e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a\"" Sep 16 05:03:20.752575 containerd[1553]: time="2025-09-16T05:03:20.752518916Z" level=info msg="StartContainer for \"cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a\"" Sep 16 05:03:20.755438 containerd[1553]: time="2025-09-16T05:03:20.755399161Z" level=info msg="connecting to shim cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a" address="unix:///run/containerd/s/a6bf89168b1d51b9c84e7689e00415255a80c935bd76b4738ec87ee541fc6648" protocol=ttrpc version=3 Sep 16 05:03:20.761451 containerd[1553]: time="2025-09-16T05:03:20.761396288Z" level=info msg="Container fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:20.782105 containerd[1553]: time="2025-09-16T05:03:20.782066776Z" level=info 
msg="CreateContainer within sandbox \"af4ea45dd2be88e95e665ec8c6779f692dc732093e85c87321757a2177f4b10c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31\"" Sep 16 05:03:20.782845 systemd[1]: Started cri-containerd-fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9.scope - libcontainer container fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9. Sep 16 05:03:20.786423 kubelet[2429]: E0916 05:03:20.785544 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:03:20.787585 containerd[1553]: time="2025-09-16T05:03:20.787267638Z" level=info msg="StartContainer for \"fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31\"" Sep 16 05:03:20.792021 containerd[1553]: time="2025-09-16T05:03:20.790226500Z" level=info msg="connecting to shim fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31" address="unix:///run/containerd/s/eb3b3c898c0b8deb55d350a74065bdb7b3f0bb24df2ecf467eda95b6148ef336" protocol=ttrpc version=3 Sep 16 05:03:20.798527 systemd[1]: Started cri-containerd-cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a.scope - libcontainer container cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a. Sep 16 05:03:20.841805 systemd[1]: Started cri-containerd-fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31.scope - libcontainer container fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31. 
Sep 16 05:03:20.857343 kubelet[2429]: E0916 05:03:20.857173 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f&limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:03:20.938592 containerd[1553]: time="2025-09-16T05:03:20.936958562Z" level=info msg="StartContainer for \"fc7190443cdbd49746e6ac8c500f5eef65e986d4c81738b656be26aeac9de6f9\" returns successfully" Sep 16 05:03:20.995384 kubelet[2429]: E0916 05:03:20.995228 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:03:20.999800 containerd[1553]: time="2025-09-16T05:03:20.999749161Z" level=info msg="StartContainer for \"cc6d61b6adf1c1e2065d26e404daf738ff22bda5721f071d7b8bf4d7fe8aaf2a\" returns successfully" Sep 16 05:03:21.020758 kubelet[2429]: E0916 05:03:21.020690 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:21.041953 kubelet[2429]: E0916 05:03:21.041896 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:21.042854 containerd[1553]: time="2025-09-16T05:03:21.042805639Z" level=info msg="StartContainer for 
\"fe6f7219202192cc948d8e27f2bf3eeabfd79490df208a6a71e066689dc73f31\" returns successfully" Sep 16 05:03:21.543145 kubelet[2429]: I0916 05:03:21.543101 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:22.049361 kubelet[2429]: E0916 05:03:22.048802 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:22.049361 kubelet[2429]: E0916 05:03:22.049165 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:22.050239 kubelet[2429]: E0916 05:03:22.050213 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:23.052016 kubelet[2429]: E0916 05:03:23.051960 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:23.055587 kubelet[2429]: E0916 05:03:23.053888 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:23.055587 kubelet[2429]: E0916 05:03:23.054322 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.668315 kubelet[2429]: E0916 05:03:25.668251 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" not found" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.683870 kubelet[2429]: E0916 05:03:25.683506 2429 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f.1865aac6b0d6328b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,UID:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,},FirstTimestamp:2025-09-16 05:03:19.905120907 +0000 UTC m=+0.442940094,LastTimestamp:2025-09-16 05:03:19.905120907 +0000 UTC m=+0.442940094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f,}" Sep 16 05:03:25.728584 kubelet[2429]: I0916 05:03:25.727677 2429 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.820538 kubelet[2429]: I0916 05:03:25.820478 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.835892 kubelet[2429]: E0916 05:03:25.835650 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.835892 kubelet[2429]: I0916 05:03:25.835697 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.839939 kubelet[2429]: E0916 05:03:25.839744 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.839939 kubelet[2429]: I0916 05:03:25.839785 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.843678 kubelet[2429]: E0916 05:03:25.843634 2429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:03:25.901238 kubelet[2429]: I0916 05:03:25.901182 2429 apiserver.go:52] "Watching apiserver" Sep 16 05:03:25.925737 kubelet[2429]: I0916 05:03:25.924694 2429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:03:27.442486 systemd[1]: Reload requested from client PID 2706 ('systemctl') (unit session-9.scope)... Sep 16 05:03:27.442510 systemd[1]: Reloading... Sep 16 05:03:27.620597 zram_generator::config[2750]: No configuration found. Sep 16 05:03:27.967479 systemd[1]: Reloading finished in 524 ms. 
Sep 16 05:03:28.003525 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:28.023382 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 05:03:28.024090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:28.024217 systemd[1]: kubelet.service: Consumed 1.007s CPU time, 134.9M memory peak. Sep 16 05:03:28.028207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:28.377247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:28.395228 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:03:28.477597 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:03:28.477597 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:03:28.477597 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 05:03:28.477597 kubelet[2798]: I0916 05:03:28.477210 2798 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 16 05:03:28.488920 kubelet[2798]: I0916 05:03:28.488868 2798 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 16 05:03:28.488920 kubelet[2798]: I0916 05:03:28.488897 2798 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 16 05:03:28.489311 kubelet[2798]: I0916 05:03:28.489271 2798 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 16 05:03:28.491481 kubelet[2798]: I0916 05:03:28.491442 2798 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 16 05:03:28.495612 kubelet[2798]: I0916 05:03:28.494611 2798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 05:03:28.508539 kubelet[2798]: I0916 05:03:28.508506 2798 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 05:03:28.513577 kubelet[2798]: I0916 05:03:28.513181 2798 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 05:03:28.513577 kubelet[2798]: I0916 05:03:28.513495 2798 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 05:03:28.514048 kubelet[2798]: I0916 05:03:28.513534 2798 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 05:03:28.514048 kubelet[2798]: I0916 05:03:28.514048 2798 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 05:03:28.514281 kubelet[2798]: I0916 05:03:28.514067 2798 container_manager_linux.go:303] "Creating device plugin manager"
Sep 16 05:03:28.514281 kubelet[2798]: I0916 05:03:28.514132 2798 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:03:28.514396 kubelet[2798]: I0916 05:03:28.514372 2798 kubelet.go:480] "Attempting to sync node with API server"
Sep 16 05:03:28.515408 kubelet[2798]: I0916 05:03:28.514391 2798 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 05:03:28.515408 kubelet[2798]: I0916 05:03:28.515215 2798 kubelet.go:386] "Adding apiserver pod source"
Sep 16 05:03:28.515408 kubelet[2798]: I0916 05:03:28.515237 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 05:03:28.521357 kubelet[2798]: I0916 05:03:28.517946 2798 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 05:03:28.521357 kubelet[2798]: I0916 05:03:28.518806 2798 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 16 05:03:28.562598 kubelet[2798]: I0916 05:03:28.561206 2798 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 16 05:03:28.562598 kubelet[2798]: I0916 05:03:28.561297 2798 server.go:1289] "Started kubelet"
Sep 16 05:03:28.564039 kubelet[2798]: I0916 05:03:28.564015 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 05:03:28.571615 kubelet[2798]: I0916 05:03:28.571568 2798 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 05:03:28.573404 kubelet[2798]: I0916 05:03:28.573381 2798 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 16 05:03:28.575315 kubelet[2798]: I0916 05:03:28.575284 2798 server.go:317] "Adding debug handlers to kubelet server"
Sep 16 05:03:28.576921 kubelet[2798]: I0916 05:03:28.575936 2798 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 16 05:03:28.579569 kubelet[2798]: I0916 05:03:28.576515 2798 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 05:03:28.581874 kubelet[2798]: I0916 05:03:28.581158 2798 factory.go:223] Registration of the systemd container factory successfully
Sep 16 05:03:28.581874 kubelet[2798]: I0916 05:03:28.581287 2798 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 05:03:28.587834 kubelet[2798]: E0916 05:03:28.587748 2798 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 16 05:03:28.591326 kubelet[2798]: I0916 05:03:28.591217 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 05:03:28.591792 kubelet[2798]: I0916 05:03:28.591763 2798 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 05:03:28.592039 kubelet[2798]: I0916 05:03:28.592010 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 05:03:28.597743 kubelet[2798]: I0916 05:03:28.597326 2798 factory.go:223] Registration of the containerd container factory successfully
Sep 16 05:03:28.635242 kubelet[2798]: I0916 05:03:28.635117 2798 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 16 05:03:28.642643 kubelet[2798]: I0916 05:03:28.642218 2798 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 16 05:03:28.642643 kubelet[2798]: I0916 05:03:28.642259 2798 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 16 05:03:28.642643 kubelet[2798]: I0916 05:03:28.642287 2798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 16 05:03:28.642643 kubelet[2798]: I0916 05:03:28.642300 2798 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 16 05:03:28.642643 kubelet[2798]: E0916 05:03:28.642369 2798 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 05:03:28.707159 kubelet[2798]: I0916 05:03:28.707123 2798 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 16 05:03:28.707159 kubelet[2798]: I0916 05:03:28.707145 2798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 16 05:03:28.707406 kubelet[2798]: I0916 05:03:28.707180 2798 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:03:28.707406 kubelet[2798]: I0916 05:03:28.707379 2798 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 16 05:03:28.707406 kubelet[2798]: I0916 05:03:28.707395 2798 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 16 05:03:28.707600 kubelet[2798]: I0916 05:03:28.707419 2798 policy_none.go:49] "None policy: Start"
Sep 16 05:03:28.707600 kubelet[2798]: I0916 05:03:28.707435 2798 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 16 05:03:28.707600 kubelet[2798]: I0916 05:03:28.707451 2798 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 05:03:28.707747 kubelet[2798]: I0916 05:03:28.707633 2798 state_mem.go:75] "Updated machine memory state"
Sep 16 05:03:28.718313 kubelet[2798]: E0916 05:03:28.718255 2798 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 16 05:03:28.721656 kubelet[2798]: I0916 05:03:28.718995 2798 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 05:03:28.721656 kubelet[2798]: I0916 05:03:28.719019 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 05:03:28.721656 kubelet[2798]: I0916 05:03:28.719278 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 05:03:28.725247 kubelet[2798]: E0916 05:03:28.725191 2798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 16 05:03:28.744135 kubelet[2798]: I0916 05:03:28.744085 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.745221 kubelet[2798]: I0916 05:03:28.745197 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.746367 kubelet[2798]: I0916 05:03:28.746261 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.761326 kubelet[2798]: I0916 05:03:28.761285 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:28.764012 kubelet[2798]: I0916 05:03:28.761651 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:28.764816 kubelet[2798]: I0916 05:03:28.764499 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:28.781399 kubelet[2798]: I0916 05:03:28.780616 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781399 kubelet[2798]: I0916 05:03:28.780685 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781399 kubelet[2798]: I0916 05:03:28.780719 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781399 kubelet[2798]: I0916 05:03:28.780753 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781668 kubelet[2798]: I0916 05:03:28.780782 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f93be7e53ea7715f23258881bef8ca-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"f3f93be7e53ea7715f23258881bef8ca\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781668 kubelet[2798]: I0916 05:03:28.780814 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/662ff5d06ad5857e7a88b00484f0cf03-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"662ff5d06ad5857e7a88b00484f0cf03\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781668 kubelet[2798]: I0916 05:03:28.780842 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781668 kubelet[2798]: I0916 05:03:28.780870 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.781802 kubelet[2798]: I0916 05:03:28.780898 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99d46631b20e62c30d46cd99aaa6fe1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" (UID: \"99d46631b20e62c30d46cd99aaa6fe1d\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.838375 kubelet[2798]: I0916 05:03:28.838066 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.850279 kubelet[2798]: I0916 05:03:28.850072 2798 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.850279 kubelet[2798]: I0916 05:03:28.850218 2798 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:28.878686 update_engine[1540]: I20250916 05:03:28.878605 1540 update_attempter.cc:509] Updating boot flags...
Sep 16 05:03:29.517288 kubelet[2798]: I0916 05:03:29.517239 2798 apiserver.go:52] "Watching apiserver"
Sep 16 05:03:29.579728 kubelet[2798]: I0916 05:03:29.579676 2798 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 16 05:03:29.680660 kubelet[2798]: I0916 05:03:29.680619 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.681041 kubelet[2798]: I0916 05:03:29.680989 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.681775 kubelet[2798]: I0916 05:03:29.681653 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.697592 kubelet[2798]: I0916 05:03:29.697529 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:29.697763 kubelet[2798]: E0916 05:03:29.697619 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" already exists" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.701581 kubelet[2798]: I0916 05:03:29.700763 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:29.701581 kubelet[2798]: I0916 05:03:29.700835 2798 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]"
Sep 16 05:03:29.701581 kubelet[2798]: E0916 05:03:29.700884 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" already exists" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.701581 kubelet[2798]: E0916 05:03:29.701146 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" already exists" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f"
Sep 16 05:03:29.736231 kubelet[2798]: I0916 05:03:29.735854 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" podStartSLOduration=1.735829549 podStartE2EDuration="1.735829549s" podCreationTimestamp="2025-09-16 05:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:29.723331739 +0000 UTC m=+1.319967940" watchObservedRunningTime="2025-09-16 05:03:29.735829549 +0000 UTC m=+1.332465750"
Sep 16 05:03:29.747066 kubelet[2798]: I0916 05:03:29.746812 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" podStartSLOduration=1.746795117 podStartE2EDuration="1.746795117s" podCreationTimestamp="2025-09-16 05:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:29.736888965 +0000 UTC m=+1.333525165" watchObservedRunningTime="2025-09-16 05:03:29.746795117 +0000 UTC m=+1.343431295"
Sep 16 05:03:29.763506 kubelet[2798]: I0916 05:03:29.763429 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" podStartSLOduration=1.763409282 podStartE2EDuration="1.763409282s" podCreationTimestamp="2025-09-16 05:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:29.74922793 +0000 UTC m=+1.345864133" watchObservedRunningTime="2025-09-16 05:03:29.763409282 +0000 UTC m=+1.360045483"
Sep 16 05:03:33.985074 kubelet[2798]: I0916 05:03:33.985018 2798 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 16 05:03:33.986173 containerd[1553]: time="2025-09-16T05:03:33.986131078Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 16 05:03:33.986916 kubelet[2798]: I0916 05:03:33.986570 2798 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 16 05:03:34.819215 systemd[1]: Created slice kubepods-besteffort-podbcd06acf_bc75_47f3_b20d_553585059e19.slice - libcontainer container kubepods-besteffort-podbcd06acf_bc75_47f3_b20d_553585059e19.slice.
Sep 16 05:03:34.823848 kubelet[2798]: I0916 05:03:34.823384 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bcd06acf-bc75-47f3-b20d-553585059e19-kube-proxy\") pod \"kube-proxy-dxlkd\" (UID: \"bcd06acf-bc75-47f3-b20d-553585059e19\") " pod="kube-system/kube-proxy-dxlkd"
Sep 16 05:03:34.824156 kubelet[2798]: I0916 05:03:34.824115 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcd06acf-bc75-47f3-b20d-553585059e19-xtables-lock\") pod \"kube-proxy-dxlkd\" (UID: \"bcd06acf-bc75-47f3-b20d-553585059e19\") " pod="kube-system/kube-proxy-dxlkd"
Sep 16 05:03:34.824156 kubelet[2798]: I0916 05:03:34.824158 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcd06acf-bc75-47f3-b20d-553585059e19-lib-modules\") pod \"kube-proxy-dxlkd\" (UID: \"bcd06acf-bc75-47f3-b20d-553585059e19\") " pod="kube-system/kube-proxy-dxlkd"
Sep 16 05:03:34.824156 kubelet[2798]: I0916 05:03:34.824199 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrj2\" (UniqueName: \"kubernetes.io/projected/bcd06acf-bc75-47f3-b20d-553585059e19-kube-api-access-6lrj2\") pod \"kube-proxy-dxlkd\" (UID: \"bcd06acf-bc75-47f3-b20d-553585059e19\") " pod="kube-system/kube-proxy-dxlkd"
Sep 16 05:03:34.933070 kubelet[2798]: E0916 05:03:34.933022 2798 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 16 05:03:34.933070 kubelet[2798]: E0916 05:03:34.933068 2798 projected.go:194] Error preparing data for projected volume kube-api-access-6lrj2 for pod kube-system/kube-proxy-dxlkd: configmap "kube-root-ca.crt" not found
Sep 16 05:03:34.933313 kubelet[2798]: E0916 05:03:34.933162 2798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcd06acf-bc75-47f3-b20d-553585059e19-kube-api-access-6lrj2 podName:bcd06acf-bc75-47f3-b20d-553585059e19 nodeName:}" failed. No retries permitted until 2025-09-16 05:03:35.433134203 +0000 UTC m=+7.029770399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6lrj2" (UniqueName: "kubernetes.io/projected/bcd06acf-bc75-47f3-b20d-553585059e19-kube-api-access-6lrj2") pod "kube-proxy-dxlkd" (UID: "bcd06acf-bc75-47f3-b20d-553585059e19") : configmap "kube-root-ca.crt" not found
Sep 16 05:03:35.253253 systemd[1]: Created slice kubepods-besteffort-pod14315b4f_574b_441e_add8_a0b707fa9ba4.slice - libcontainer container kubepods-besteffort-pod14315b4f_574b_441e_add8_a0b707fa9ba4.slice.
Sep 16 05:03:35.327910 kubelet[2798]: I0916 05:03:35.327831 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14315b4f-574b-441e-add8-a0b707fa9ba4-var-lib-calico\") pod \"tigera-operator-755d956888-vdm5w\" (UID: \"14315b4f-574b-441e-add8-a0b707fa9ba4\") " pod="tigera-operator/tigera-operator-755d956888-vdm5w"
Sep 16 05:03:35.327910 kubelet[2798]: I0916 05:03:35.327918 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxzl\" (UniqueName: \"kubernetes.io/projected/14315b4f-574b-441e-add8-a0b707fa9ba4-kube-api-access-hrxzl\") pod \"tigera-operator-755d956888-vdm5w\" (UID: \"14315b4f-574b-441e-add8-a0b707fa9ba4\") " pod="tigera-operator/tigera-operator-755d956888-vdm5w"
Sep 16 05:03:35.561197 containerd[1553]: time="2025-09-16T05:03:35.561143149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vdm5w,Uid:14315b4f-574b-441e-add8-a0b707fa9ba4,Namespace:tigera-operator,Attempt:0,}"
Sep 16 05:03:35.596473 containerd[1553]: time="2025-09-16T05:03:35.596333842Z" level=info msg="connecting to shim e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b" address="unix:///run/containerd/s/e5da57448ef9bd0b9785911da2edfa6a77b1c73b1bec9fe4130bfea5277d25b8" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:35.638813 systemd[1]: Started cri-containerd-e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b.scope - libcontainer container e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b.
Sep 16 05:03:35.701729 containerd[1553]: time="2025-09-16T05:03:35.701665160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vdm5w,Uid:14315b4f-574b-441e-add8-a0b707fa9ba4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b\""
Sep 16 05:03:35.705236 containerd[1553]: time="2025-09-16T05:03:35.705106654Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 16 05:03:35.729273 containerd[1553]: time="2025-09-16T05:03:35.729227630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxlkd,Uid:bcd06acf-bc75-47f3-b20d-553585059e19,Namespace:kube-system,Attempt:0,}"
Sep 16 05:03:35.759363 containerd[1553]: time="2025-09-16T05:03:35.759265211Z" level=info msg="connecting to shim 36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147" address="unix:///run/containerd/s/3ebb3e7a6f95b5511e334dbaf595ec1f17c0fdc49d563f0255b51c6bf47f6e6a" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:35.794777 systemd[1]: Started cri-containerd-36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147.scope - libcontainer container 36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147.
Sep 16 05:03:35.837911 containerd[1553]: time="2025-09-16T05:03:35.837695080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxlkd,Uid:bcd06acf-bc75-47f3-b20d-553585059e19,Namespace:kube-system,Attempt:0,} returns sandbox id \"36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147\""
Sep 16 05:03:35.846542 containerd[1553]: time="2025-09-16T05:03:35.846490080Z" level=info msg="CreateContainer within sandbox \"36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 05:03:35.858200 containerd[1553]: time="2025-09-16T05:03:35.858148314Z" level=info msg="Container 6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:35.872457 containerd[1553]: time="2025-09-16T05:03:35.872390196Z" level=info msg="CreateContainer within sandbox \"36de42323c9963b979c44952a33840d36b1114ba67a97d824ce3749b7770c147\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3\""
Sep 16 05:03:35.874300 containerd[1553]: time="2025-09-16T05:03:35.874257335Z" level=info msg="StartContainer for \"6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3\""
Sep 16 05:03:35.876714 containerd[1553]: time="2025-09-16T05:03:35.876671434Z" level=info msg="connecting to shim 6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3" address="unix:///run/containerd/s/3ebb3e7a6f95b5511e334dbaf595ec1f17c0fdc49d563f0255b51c6bf47f6e6a" protocol=ttrpc version=3
Sep 16 05:03:35.905808 systemd[1]: Started cri-containerd-6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3.scope - libcontainer container 6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3.
Sep 16 05:03:35.968321 containerd[1553]: time="2025-09-16T05:03:35.968266263Z" level=info msg="StartContainer for \"6c380bf566186fb399811c006a87ac2acc36fc11931261badba084a20987b0b3\" returns successfully"
Sep 16 05:03:36.720599 kubelet[2798]: I0916 05:03:36.720282 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxlkd" podStartSLOduration=2.720254857 podStartE2EDuration="2.720254857s" podCreationTimestamp="2025-09-16 05:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:03:36.71839222 +0000 UTC m=+8.315028423" watchObservedRunningTime="2025-09-16 05:03:36.720254857 +0000 UTC m=+8.316891057"
Sep 16 05:03:36.760307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402898025.mount: Deactivated successfully.
Sep 16 05:03:37.737001 containerd[1553]: time="2025-09-16T05:03:37.736921928Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:37.738489 containerd[1553]: time="2025-09-16T05:03:37.738195613Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 16 05:03:37.739919 containerd[1553]: time="2025-09-16T05:03:37.739870249Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:37.743665 containerd[1553]: time="2025-09-16T05:03:37.742912044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:03:37.744072 containerd[1553]: time="2025-09-16T05:03:37.744026945Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.038831126s"
Sep 16 05:03:37.744191 containerd[1553]: time="2025-09-16T05:03:37.744077811Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 16 05:03:37.750518 containerd[1553]: time="2025-09-16T05:03:37.750476671Z" level=info msg="CreateContainer within sandbox \"e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 16 05:03:37.764629 containerd[1553]: time="2025-09-16T05:03:37.760891849Z" level=info msg="Container e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:37.775521 containerd[1553]: time="2025-09-16T05:03:37.775446426Z" level=info msg="CreateContainer within sandbox \"e580a58146f0bd55eb562b1111ace3e3373e6e26587b92bf3dd2c8f6b0966c6b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35\""
Sep 16 05:03:37.776602 containerd[1553]: time="2025-09-16T05:03:37.776438831Z" level=info msg="StartContainer for \"e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35\""
Sep 16 05:03:37.778764 containerd[1553]: time="2025-09-16T05:03:37.778722073Z" level=info msg="connecting to shim e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35" address="unix:///run/containerd/s/e5da57448ef9bd0b9785911da2edfa6a77b1c73b1bec9fe4130bfea5277d25b8" protocol=ttrpc version=3
Sep 16 05:03:37.813860 systemd[1]: Started cri-containerd-e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35.scope - libcontainer container e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35.
Sep 16 05:03:37.857701 containerd[1553]: time="2025-09-16T05:03:37.857619599Z" level=info msg="StartContainer for \"e7b24f51e127078845cfc5546af5ce386f1190d77b02614a9fea788ab5015b35\" returns successfully"
Sep 16 05:03:39.810537 kubelet[2798]: I0916 05:03:39.810422 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-vdm5w" podStartSLOduration=2.768750341 podStartE2EDuration="4.810398622s" podCreationTimestamp="2025-09-16 05:03:35 +0000 UTC" firstStartedPulling="2025-09-16 05:03:35.703997368 +0000 UTC m=+7.300633542" lastFinishedPulling="2025-09-16 05:03:37.745645629 +0000 UTC m=+9.342281823" observedRunningTime="2025-09-16 05:03:38.728446231 +0000 UTC m=+10.325082437" watchObservedRunningTime="2025-09-16 05:03:39.810398622 +0000 UTC m=+11.407034821"
Sep 16 05:03:45.094922 sudo[1876]: pam_unix(sudo:session): session closed for user root
Sep 16 05:03:45.141877 sshd[1875]: Connection closed by 139.178.68.195 port 41492
Sep 16 05:03:45.141723 sshd-session[1872]: pam_unix(sshd:session): session closed for user core
Sep 16 05:03:45.152697 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit.
Sep 16 05:03:45.154897 systemd[1]: sshd@8-10.128.0.94:22-139.178.68.195:41492.service: Deactivated successfully.
Sep 16 05:03:45.164443 systemd[1]: session-9.scope: Deactivated successfully.
Sep 16 05:03:45.166408 systemd[1]: session-9.scope: Consumed 5.975s CPU time, 234.4M memory peak.
Sep 16 05:03:45.173065 systemd-logind[1537]: Removed session 9.
Sep 16 05:03:50.486981 systemd[1]: Created slice kubepods-besteffort-pod34083dcf_f155_4f5a_987c_1594d5be1687.slice - libcontainer container kubepods-besteffort-pod34083dcf_f155_4f5a_987c_1594d5be1687.slice.
Sep 16 05:03:50.534150 kubelet[2798]: I0916 05:03:50.534089 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/34083dcf-f155-4f5a-987c-1594d5be1687-typha-certs\") pod \"calico-typha-7678954687-w8cxh\" (UID: \"34083dcf-f155-4f5a-987c-1594d5be1687\") " pod="calico-system/calico-typha-7678954687-w8cxh"
Sep 16 05:03:50.534751 kubelet[2798]: I0916 05:03:50.534168 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34083dcf-f155-4f5a-987c-1594d5be1687-tigera-ca-bundle\") pod \"calico-typha-7678954687-w8cxh\" (UID: \"34083dcf-f155-4f5a-987c-1594d5be1687\") " pod="calico-system/calico-typha-7678954687-w8cxh"
Sep 16 05:03:50.534751 kubelet[2798]: I0916 05:03:50.534198 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdjcc\" (UniqueName: \"kubernetes.io/projected/34083dcf-f155-4f5a-987c-1594d5be1687-kube-api-access-rdjcc\") pod \"calico-typha-7678954687-w8cxh\" (UID: \"34083dcf-f155-4f5a-987c-1594d5be1687\") " pod="calico-system/calico-typha-7678954687-w8cxh"
Sep 16 05:03:50.747796 systemd[1]: Created slice kubepods-besteffort-podf58723b1_8033_412f_af60_dae555a2349d.slice - libcontainer container kubepods-besteffort-podf58723b1_8033_412f_af60_dae555a2349d.slice.
Sep 16 05:03:50.796071 containerd[1553]: time="2025-09-16T05:03:50.795978577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7678954687-w8cxh,Uid:34083dcf-f155-4f5a-987c-1594d5be1687,Namespace:calico-system,Attempt:0,}"
Sep 16 05:03:50.838542 kubelet[2798]: I0916 05:03:50.837942 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f58723b1-8033-412f-af60-dae555a2349d-node-certs\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838542 kubelet[2798]: I0916 05:03:50.838015 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f58723b1-8033-412f-af60-dae555a2349d-tigera-ca-bundle\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838542 kubelet[2798]: I0916 05:03:50.838049 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-flexvol-driver-host\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838542 kubelet[2798]: I0916 05:03:50.838088 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-xtables-lock\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838542 kubelet[2798]: I0916 05:03:50.838116 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-lib-modules\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838984 kubelet[2798]: I0916 05:03:50.838143 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-var-lib-calico\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838984 kubelet[2798]: I0916 05:03:50.838171 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-cni-bin-dir\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838984 kubelet[2798]: I0916 05:03:50.838201 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfq2r\" (UniqueName: \"kubernetes.io/projected/f58723b1-8033-412f-af60-dae555a2349d-kube-api-access-dfq2r\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.838984 kubelet[2798]: I0916 05:03:50.838231 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-policysync\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.841471 kubelet[2798]: I0916 05:03:50.841312 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-cni-log-dir\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.842301 kubelet[2798]: I0916 05:03:50.842020 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-var-run-calico\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.842397 kubelet[2798]: I0916 05:03:50.842370 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f58723b1-8033-412f-af60-dae555a2349d-cni-net-dir\") pod \"calico-node-c5w9x\" (UID: \"f58723b1-8033-412f-af60-dae555a2349d\") " pod="calico-system/calico-node-c5w9x"
Sep 16 05:03:50.852318 containerd[1553]: time="2025-09-16T05:03:50.852242062Z" level=info msg="connecting to shim 3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3" address="unix:///run/containerd/s/bbaffd8b2287445ebd9112f1174258af2326305bb17eba2ccc9b5502d8d85d21" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:50.933617 systemd[1]: Started cri-containerd-3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3.scope - libcontainer container 3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3.
Sep 16 05:03:50.954320 kubelet[2798]: E0916 05:03:50.954225 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 05:03:50.954320 kubelet[2798]: W0916 05:03:50.954257 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 05:03:50.955450 kubelet[2798]: E0916 05:03:50.954445 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 05:03:50.966209 kubelet[2798]: E0916 05:03:50.966150 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41"
Sep 16 05:03:51.046414 kubelet[2798]: I0916 05:03:51.046229 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d62b9cd3-4af7-4636-9ff3-4ff29ca03a41-kubelet-dir\") pod \"csi-node-driver-xxzmm\" (UID: \"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41\") " pod="calico-system/csi-node-driver-xxzmm"
Sep 16 05:03:51.048033 kubelet[2798]: I0916 05:03:51.048004 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d62b9cd3-4af7-4636-9ff3-4ff29ca03a41-socket-dir\") pod \"csi-node-driver-xxzmm\" (UID: \"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41\") " pod="calico-system/csi-node-driver-xxzmm"
Sep 16 05:03:51.052019 kubelet[2798]: I0916 05:03:51.051521 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d62b9cd3-4af7-4636-9ff3-4ff29ca03a41-varrun\") pod \"csi-node-driver-xxzmm\" (UID: \"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41\") " pod="calico-system/csi-node-driver-xxzmm"
Sep 16 05:03:51.053670 kubelet[2798]: I0916 05:03:51.053632 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dggm\" (UniqueName: \"kubernetes.io/projected/d62b9cd3-4af7-4636-9ff3-4ff29ca03a41-kube-api-access-4dggm\") pod \"csi-node-driver-xxzmm\" (UID: \"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41\") " pod="calico-system/csi-node-driver-xxzmm"
Sep 16 05:03:51.054132 kubelet[2798]: I0916 05:03:51.054025 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d62b9cd3-4af7-4636-9ff3-4ff29ca03a41-registration-dir\") pod \"csi-node-driver-xxzmm\" (UID: \"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41\") " pod="calico-system/csi-node-driver-xxzmm"
Sep 16 05:03:51.055904 containerd[1553]: time="2025-09-16T05:03:51.055858020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c5w9x,Uid:f58723b1-8033-412f-af60-dae555a2349d,Namespace:calico-system,Attempt:0,}"
Sep 16 05:03:51.093215 containerd[1553]: time="2025-09-16T05:03:51.093152600Z" level=info msg="connecting to shim 9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c" address="unix:///run/containerd/s/1b52a3de9859f15adb45f10430c1557a8bd48c168f92e8435a9fff56cd7b5f31" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:03:51.152892 systemd[1]: Started cri-containerd-9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c.scope - libcontainer container 9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c.
Sep 16 05:03:51.155851 kubelet[2798]: E0916 05:03:51.155807 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.155851 kubelet[2798]: W0916 05:03:51.155841 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.156119 kubelet[2798]: E0916 05:03:51.155891 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.158630 kubelet[2798]: E0916 05:03:51.158585 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.159183 kubelet[2798]: W0916 05:03:51.158913 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.159183 kubelet[2798]: E0916 05:03:51.158971 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.160448 kubelet[2798]: E0916 05:03:51.160380 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.160692 kubelet[2798]: W0916 05:03:51.160591 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.160692 kubelet[2798]: E0916 05:03:51.160619 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.161958 kubelet[2798]: E0916 05:03:51.161836 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.161958 kubelet[2798]: W0916 05:03:51.161889 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.161958 kubelet[2798]: E0916 05:03:51.161909 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.162949 kubelet[2798]: E0916 05:03:51.162879 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.162949 kubelet[2798]: W0916 05:03:51.162897 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.162949 kubelet[2798]: E0916 05:03:51.162914 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.164032 kubelet[2798]: E0916 05:03:51.163949 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.164032 kubelet[2798]: W0916 05:03:51.163970 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.164032 kubelet[2798]: E0916 05:03:51.163989 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.164977 kubelet[2798]: E0916 05:03:51.164917 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.164977 kubelet[2798]: W0916 05:03:51.164937 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.164977 kubelet[2798]: E0916 05:03:51.164956 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.165995 kubelet[2798]: E0916 05:03:51.165909 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.165995 kubelet[2798]: W0916 05:03:51.165958 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.165995 kubelet[2798]: E0916 05:03:51.165975 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.166778 kubelet[2798]: E0916 05:03:51.166653 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.167114 kubelet[2798]: W0916 05:03:51.166894 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.167114 kubelet[2798]: E0916 05:03:51.166920 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.167753 kubelet[2798]: E0916 05:03:51.167631 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.168004 kubelet[2798]: W0916 05:03:51.167869 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.168004 kubelet[2798]: E0916 05:03:51.167899 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.168725 kubelet[2798]: E0916 05:03:51.168668 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.168725 kubelet[2798]: W0916 05:03:51.168686 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.168725 kubelet[2798]: E0916 05:03:51.168705 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.169814 kubelet[2798]: E0916 05:03:51.169785 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.170034 kubelet[2798]: W0916 05:03:51.169923 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.170034 kubelet[2798]: E0916 05:03:51.169947 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.170533 kubelet[2798]: E0916 05:03:51.170479 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.170533 kubelet[2798]: W0916 05:03:51.170496 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.170533 kubelet[2798]: E0916 05:03:51.170513 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.171137 kubelet[2798]: E0916 05:03:51.171042 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.171362 kubelet[2798]: W0916 05:03:51.171243 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.171362 kubelet[2798]: E0916 05:03:51.171272 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.171986 kubelet[2798]: E0916 05:03:51.171931 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.171986 kubelet[2798]: W0916 05:03:51.171950 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.171986 kubelet[2798]: E0916 05:03:51.171967 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.172786 kubelet[2798]: E0916 05:03:51.172762 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.173127 kubelet[2798]: W0916 05:03:51.172937 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.173127 kubelet[2798]: E0916 05:03:51.172998 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.173864 kubelet[2798]: E0916 05:03:51.173819 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.174035 kubelet[2798]: W0916 05:03:51.173967 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.174193 kubelet[2798]: E0916 05:03:51.174135 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.174920 kubelet[2798]: E0916 05:03:51.174890 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.175356 kubelet[2798]: W0916 05:03:51.175093 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.175356 kubelet[2798]: E0916 05:03:51.175118 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.175988 kubelet[2798]: E0916 05:03:51.175954 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.176296 kubelet[2798]: W0916 05:03:51.176182 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.176296 kubelet[2798]: E0916 05:03:51.176208 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.177450 kubelet[2798]: E0916 05:03:51.177325 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.177745 kubelet[2798]: W0916 05:03:51.177579 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.177745 kubelet[2798]: E0916 05:03:51.177624 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.178314 kubelet[2798]: E0916 05:03:51.178201 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.178587 kubelet[2798]: W0916 05:03:51.178412 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.178587 kubelet[2798]: E0916 05:03:51.178447 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.179395 kubelet[2798]: E0916 05:03:51.179325 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.179395 kubelet[2798]: W0916 05:03:51.179345 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.179395 kubelet[2798]: E0916 05:03:51.179362 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.181580 kubelet[2798]: E0916 05:03:51.180446 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.181580 kubelet[2798]: W0916 05:03:51.180481 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.181580 kubelet[2798]: E0916 05:03:51.180500 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.182360 kubelet[2798]: E0916 05:03:51.182303 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.182360 kubelet[2798]: W0916 05:03:51.182323 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.182360 kubelet[2798]: E0916 05:03:51.182341 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.183085 kubelet[2798]: E0916 05:03:51.183015 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.183085 kubelet[2798]: W0916 05:03:51.183034 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.183085 kubelet[2798]: E0916 05:03:51.183051 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:51.206629 kubelet[2798]: E0916 05:03:51.206461 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:51.207690 kubelet[2798]: W0916 05:03:51.206550 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:51.207690 kubelet[2798]: E0916 05:03:51.207506 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:51.290993 containerd[1553]: time="2025-09-16T05:03:51.290522695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7678954687-w8cxh,Uid:34083dcf-f155-4f5a-987c-1594d5be1687,Namespace:calico-system,Attempt:0,} returns sandbox id \"3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3\"" Sep 16 05:03:51.299298 containerd[1553]: time="2025-09-16T05:03:51.297539285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c5w9x,Uid:f58723b1-8033-412f-af60-dae555a2349d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\"" Sep 16 05:03:51.303017 containerd[1553]: time="2025-09-16T05:03:51.302348300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 05:03:52.256208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438350363.mount: Deactivated successfully. 
Sep 16 05:03:52.648736 kubelet[2798]: E0916 05:03:52.647607 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:03:53.557032 containerd[1553]: time="2025-09-16T05:03:53.556965719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:53.558367 containerd[1553]: time="2025-09-16T05:03:53.558129337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 16 05:03:53.559350 containerd[1553]: time="2025-09-16T05:03:53.559306820Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:53.562004 containerd[1553]: time="2025-09-16T05:03:53.561963083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:53.562953 containerd[1553]: time="2025-09-16T05:03:53.562915218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.255442474s" Sep 16 05:03:53.563099 containerd[1553]: time="2025-09-16T05:03:53.563073770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 05:03:53.564853 containerd[1553]: time="2025-09-16T05:03:53.564823025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 05:03:53.598278 containerd[1553]: time="2025-09-16T05:03:53.598199672Z" level=info msg="CreateContainer within sandbox \"3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 05:03:53.609208 containerd[1553]: time="2025-09-16T05:03:53.607796037Z" level=info msg="Container 85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:53.620088 containerd[1553]: time="2025-09-16T05:03:53.620053527Z" level=info msg="CreateContainer within sandbox \"3421ba952982edff0e8b3568c124b0b8d2085a8c1ef9a0806975800baf8317e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74\"" Sep 16 05:03:53.622191 containerd[1553]: time="2025-09-16T05:03:53.622156707Z" level=info msg="StartContainer for \"85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74\"" Sep 16 05:03:53.624892 containerd[1553]: time="2025-09-16T05:03:53.624855175Z" level=info msg="connecting to shim 85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74" address="unix:///run/containerd/s/bbaffd8b2287445ebd9112f1174258af2326305bb17eba2ccc9b5502d8d85d21" protocol=ttrpc version=3 Sep 16 05:03:53.673176 systemd[1]: Started cri-containerd-85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74.scope - libcontainer container 85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74. 
Sep 16 05:03:53.771727 containerd[1553]: time="2025-09-16T05:03:53.771634309Z" level=info msg="StartContainer for \"85a3dab5924d195cf5e0cae86b02aff8521133958eed408b57574bfbae959b74\" returns successfully" Sep 16 05:03:53.802657 kubelet[2798]: I0916 05:03:53.802067 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7678954687-w8cxh" podStartSLOduration=1.539327968 podStartE2EDuration="3.802041659s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="2025-09-16 05:03:51.301515775 +0000 UTC m=+22.898151962" lastFinishedPulling="2025-09-16 05:03:53.564229473 +0000 UTC m=+25.160865653" observedRunningTime="2025-09-16 05:03:53.801219418 +0000 UTC m=+25.397855618" watchObservedRunningTime="2025-09-16 05:03:53.802041659 +0000 UTC m=+25.398677860" Sep 16 05:03:53.856787 kubelet[2798]: E0916 05:03:53.856033 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.856787 kubelet[2798]: W0916 05:03:53.856264 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.856787 kubelet[2798]: E0916 05:03:53.856312 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.857619 kubelet[2798]: E0916 05:03:53.857596 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.857619 kubelet[2798]: W0916 05:03:53.857618 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.857782 kubelet[2798]: E0916 05:03:53.857643 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.858319 kubelet[2798]: E0916 05:03:53.858286 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.858319 kubelet[2798]: W0916 05:03:53.858309 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.858474 kubelet[2798]: E0916 05:03:53.858328 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.858837 kubelet[2798]: E0916 05:03:53.858793 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.858837 kubelet[2798]: W0916 05:03:53.858813 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.858837 kubelet[2798]: E0916 05:03:53.858832 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.859384 kubelet[2798]: E0916 05:03:53.859348 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.859384 kubelet[2798]: W0916 05:03:53.859367 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.859384 kubelet[2798]: E0916 05:03:53.859385 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.859741 kubelet[2798]: E0916 05:03:53.859720 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.859741 kubelet[2798]: W0916 05:03:53.859738 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.860024 kubelet[2798]: E0916 05:03:53.859755 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.860091 kubelet[2798]: E0916 05:03:53.860056 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.860091 kubelet[2798]: W0916 05:03:53.860070 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.860091 kubelet[2798]: E0916 05:03:53.860086 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.860518 kubelet[2798]: E0916 05:03:53.860475 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.860518 kubelet[2798]: W0916 05:03:53.860514 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.860698 kubelet[2798]: E0916 05:03:53.860532 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.861715 kubelet[2798]: E0916 05:03:53.861689 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.861715 kubelet[2798]: W0916 05:03:53.861712 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.862009 kubelet[2798]: E0916 05:03:53.861729 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.862172 kubelet[2798]: E0916 05:03:53.862149 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.862172 kubelet[2798]: W0916 05:03:53.862171 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.862345 kubelet[2798]: E0916 05:03:53.862187 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.862852 kubelet[2798]: E0916 05:03:53.862825 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.862852 kubelet[2798]: W0916 05:03:53.862848 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.863013 kubelet[2798]: E0916 05:03:53.862866 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.863892 kubelet[2798]: E0916 05:03:53.863868 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.863892 kubelet[2798]: W0916 05:03:53.863890 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.864060 kubelet[2798]: E0916 05:03:53.863907 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.864428 kubelet[2798]: E0916 05:03:53.864401 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.864599 kubelet[2798]: W0916 05:03:53.864541 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.864692 kubelet[2798]: E0916 05:03:53.864609 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.865283 kubelet[2798]: E0916 05:03:53.865243 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.865283 kubelet[2798]: W0916 05:03:53.865267 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.865425 kubelet[2798]: E0916 05:03:53.865293 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.866158 kubelet[2798]: E0916 05:03:53.866130 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.866158 kubelet[2798]: W0916 05:03:53.866156 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.866327 kubelet[2798]: E0916 05:03:53.866174 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.888692 kubelet[2798]: E0916 05:03:53.888651 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.888692 kubelet[2798]: W0916 05:03:53.888687 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.888916 kubelet[2798]: E0916 05:03:53.888712 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.889873 kubelet[2798]: E0916 05:03:53.889841 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.889873 kubelet[2798]: W0916 05:03:53.889869 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.890059 kubelet[2798]: E0916 05:03:53.889890 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.890888 kubelet[2798]: E0916 05:03:53.890858 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.890888 kubelet[2798]: W0916 05:03:53.890884 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.891039 kubelet[2798]: E0916 05:03:53.890905 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.892911 kubelet[2798]: E0916 05:03:53.892877 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.892911 kubelet[2798]: W0916 05:03:53.892907 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.893083 kubelet[2798]: E0916 05:03:53.892928 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.893327 kubelet[2798]: E0916 05:03:53.893303 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.893327 kubelet[2798]: W0916 05:03:53.893325 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.893451 kubelet[2798]: E0916 05:03:53.893342 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.893716 kubelet[2798]: E0916 05:03:53.893693 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.893716 kubelet[2798]: W0916 05:03:53.893714 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.893863 kubelet[2798]: E0916 05:03:53.893730 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.894079 kubelet[2798]: E0916 05:03:53.894057 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.894079 kubelet[2798]: W0916 05:03:53.894078 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.894204 kubelet[2798]: E0916 05:03:53.894094 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.894442 kubelet[2798]: E0916 05:03:53.894417 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.894442 kubelet[2798]: W0916 05:03:53.894440 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.894605 kubelet[2798]: E0916 05:03:53.894457 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.896875 kubelet[2798]: E0916 05:03:53.896844 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.896982 kubelet[2798]: W0916 05:03:53.896888 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.896982 kubelet[2798]: E0916 05:03:53.896908 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.897680 kubelet[2798]: E0916 05:03:53.897530 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.897680 kubelet[2798]: W0916 05:03:53.897592 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.897680 kubelet[2798]: E0916 05:03:53.897611 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.897999 kubelet[2798]: E0916 05:03:53.897978 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.897999 kubelet[2798]: W0916 05:03:53.897999 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.898131 kubelet[2798]: E0916 05:03:53.898015 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.898421 kubelet[2798]: E0916 05:03:53.898397 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.898421 kubelet[2798]: W0916 05:03:53.898419 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.898584 kubelet[2798]: E0916 05:03:53.898436 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.900807 kubelet[2798]: E0916 05:03:53.900780 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.900807 kubelet[2798]: W0916 05:03:53.900804 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.900955 kubelet[2798]: E0916 05:03:53.900823 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.901227 kubelet[2798]: E0916 05:03:53.901203 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.901227 kubelet[2798]: W0916 05:03:53.901226 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.901373 kubelet[2798]: E0916 05:03:53.901243 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.904145 kubelet[2798]: E0916 05:03:53.904076 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.904145 kubelet[2798]: W0916 05:03:53.904104 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.904145 kubelet[2798]: E0916 05:03:53.904122 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.905211 kubelet[2798]: E0916 05:03:53.904516 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.905211 kubelet[2798]: W0916 05:03:53.904534 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.905211 kubelet[2798]: E0916 05:03:53.904551 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:53.905428 kubelet[2798]: E0916 05:03:53.905354 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.905428 kubelet[2798]: W0916 05:03:53.905370 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.905428 kubelet[2798]: E0916 05:03:53.905387 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 05:03:53.906672 kubelet[2798]: E0916 05:03:53.905854 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 05:03:53.906672 kubelet[2798]: W0916 05:03:53.905869 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 05:03:53.906672 kubelet[2798]: E0916 05:03:53.905885 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 05:03:54.535332 containerd[1553]: time="2025-09-16T05:03:54.535246245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:54.537372 containerd[1553]: time="2025-09-16T05:03:54.537307738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 16 05:03:54.538132 containerd[1553]: time="2025-09-16T05:03:54.538058252Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:54.547339 containerd[1553]: time="2025-09-16T05:03:54.547252339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:03:54.548582 containerd[1553]: time="2025-09-16T05:03:54.548389364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 983.38389ms" Sep 16 05:03:54.548582 containerd[1553]: time="2025-09-16T05:03:54.548439036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 05:03:54.554243 containerd[1553]: time="2025-09-16T05:03:54.554175830Z" level=info msg="CreateContainer within sandbox \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 05:03:54.567792 containerd[1553]: time="2025-09-16T05:03:54.567691423Z" level=info msg="Container 3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:03:54.590859 containerd[1553]: time="2025-09-16T05:03:54.590776544Z" level=info msg="CreateContainer within sandbox \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\"" Sep 16 05:03:54.593422 containerd[1553]: time="2025-09-16T05:03:54.592662974Z" level=info msg="StartContainer for \"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\"" Sep 16 05:03:54.597583 containerd[1553]: time="2025-09-16T05:03:54.597300018Z" level=info msg="connecting to shim 3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c" address="unix:///run/containerd/s/1b52a3de9859f15adb45f10430c1557a8bd48c168f92e8435a9fff56cd7b5f31" protocol=ttrpc version=3 Sep 16 05:03:54.633789 systemd[1]: Started cri-containerd-3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c.scope - libcontainer container 3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c. 
Sep 16 05:03:54.646876 kubelet[2798]: E0916 05:03:54.643804 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:03:54.724015 containerd[1553]: time="2025-09-16T05:03:54.723964613Z" level=info msg="StartContainer for \"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\" returns successfully" Sep 16 05:03:54.745125 systemd[1]: cri-containerd-3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c.scope: Deactivated successfully. Sep 16 05:03:54.755595 containerd[1553]: time="2025-09-16T05:03:54.755485930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\" id:\"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\" pid:3500 exited_at:{seconds:1757999034 nanos:754848699}" Sep 16 05:03:54.755826 containerd[1553]: time="2025-09-16T05:03:54.755788542Z" level=info msg="received exit event container_id:\"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\" id:\"3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c\" pid:3500 exited_at:{seconds:1757999034 nanos:754848699}" Sep 16 05:03:54.796615 kubelet[2798]: I0916 05:03:54.794965 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:03:54.806531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ccbdfe58f2e938f36fb184f7059133111b65db296ad16aef937f315a013056c-rootfs.mount: Deactivated successfully. 
Sep 16 05:03:56.646905 kubelet[2798]: E0916 05:03:56.646840 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:03:56.806037 containerd[1553]: time="2025-09-16T05:03:56.805989263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 05:03:58.643874 kubelet[2798]: E0916 05:03:58.643804 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:04:00.056341 containerd[1553]: time="2025-09-16T05:04:00.056269560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:00.057850 containerd[1553]: time="2025-09-16T05:04:00.057593188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 05:04:00.059259 containerd[1553]: time="2025-09-16T05:04:00.059211538Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:00.062317 containerd[1553]: time="2025-09-16T05:04:00.062267189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:00.063908 containerd[1553]: time="2025-09-16T05:04:00.063337497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.25729046s" Sep 16 05:04:00.063908 containerd[1553]: time="2025-09-16T05:04:00.063384941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 05:04:00.069488 containerd[1553]: time="2025-09-16T05:04:00.069430001Z" level=info msg="CreateContainer within sandbox \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 05:04:00.085486 containerd[1553]: time="2025-09-16T05:04:00.084265666Z" level=info msg="Container 53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:00.102400 containerd[1553]: time="2025-09-16T05:04:00.102337483Z" level=info msg="CreateContainer within sandbox \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\"" Sep 16 05:04:00.104611 containerd[1553]: time="2025-09-16T05:04:00.103524375Z" level=info msg="StartContainer for \"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\"" Sep 16 05:04:00.106481 containerd[1553]: time="2025-09-16T05:04:00.106435624Z" level=info msg="connecting to shim 53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d" address="unix:///run/containerd/s/1b52a3de9859f15adb45f10430c1557a8bd48c168f92e8435a9fff56cd7b5f31" protocol=ttrpc version=3 Sep 16 05:04:00.146825 systemd[1]: Started cri-containerd-53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d.scope - libcontainer container 
53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d. Sep 16 05:04:00.210094 containerd[1553]: time="2025-09-16T05:04:00.210052611Z" level=info msg="StartContainer for \"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\" returns successfully" Sep 16 05:04:00.645863 kubelet[2798]: E0916 05:04:00.645787 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:04:01.251689 containerd[1553]: time="2025-09-16T05:04:01.251596019Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:04:01.254984 systemd[1]: cri-containerd-53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d.scope: Deactivated successfully. Sep 16 05:04:01.255906 systemd[1]: cri-containerd-53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d.scope: Consumed 715ms CPU time, 193.7M memory peak, 171.3M written to disk. 
Sep 16 05:04:01.257094 containerd[1553]: time="2025-09-16T05:04:01.257003492Z" level=info msg="received exit event container_id:\"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\" id:\"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\" pid:3560 exited_at:{seconds:1757999041 nanos:256694106}" Sep 16 05:04:01.258012 containerd[1553]: time="2025-09-16T05:04:01.257827864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\" id:\"53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d\" pid:3560 exited_at:{seconds:1757999041 nanos:256694106}" Sep 16 05:04:01.287820 kubelet[2798]: I0916 05:04:01.287786 2798 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 05:04:01.300712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53541cc1519361d17f674d45b711562516f82278ff0e3e3355816887290af28d-rootfs.mount: Deactivated successfully. Sep 16 05:04:01.384629 systemd[1]: Created slice kubepods-burstable-pod7e88b61e_2da8_431f_b467_19b4511c59b5.slice - libcontainer container kubepods-burstable-pod7e88b61e_2da8_431f_b467_19b4511c59b5.slice. 
Sep 16 05:04:01.455037 kubelet[2798]: I0916 05:04:01.454882 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e88b61e-2da8-431f-b467-19b4511c59b5-config-volume\") pod \"coredns-674b8bbfcf-rc6fx\" (UID: \"7e88b61e-2da8-431f-b467-19b4511c59b5\") " pod="kube-system/coredns-674b8bbfcf-rc6fx" Sep 16 05:04:01.455037 kubelet[2798]: I0916 05:04:01.454959 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgrv7\" (UniqueName: \"kubernetes.io/projected/7e88b61e-2da8-431f-b467-19b4511c59b5-kube-api-access-vgrv7\") pod \"coredns-674b8bbfcf-rc6fx\" (UID: \"7e88b61e-2da8-431f-b467-19b4511c59b5\") " pod="kube-system/coredns-674b8bbfcf-rc6fx" Sep 16 05:04:01.592075 systemd[1]: Created slice kubepods-besteffort-podb47c184d_ee1c_4ffb_9645_68a4632c0a00.slice - libcontainer container kubepods-besteffort-podb47c184d_ee1c_4ffb_9645_68a4632c0a00.slice. 
Sep 16 05:04:01.656765 kubelet[2798]: I0916 05:04:01.656679 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-ca-bundle\") pod \"whisker-7874456d44-jc6vl\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " pod="calico-system/whisker-7874456d44-jc6vl" Sep 16 05:04:01.656765 kubelet[2798]: I0916 05:04:01.656758 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-backend-key-pair\") pod \"whisker-7874456d44-jc6vl\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " pod="calico-system/whisker-7874456d44-jc6vl" Sep 16 05:04:01.779089 kubelet[2798]: I0916 05:04:01.656805 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxldp\" (UniqueName: \"kubernetes.io/projected/b47c184d-ee1c-4ffb-9645-68a4632c0a00-kube-api-access-rxldp\") pod \"whisker-7874456d44-jc6vl\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " pod="calico-system/whisker-7874456d44-jc6vl" Sep 16 05:04:01.782473 containerd[1553]: time="2025-09-16T05:04:01.780689772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rc6fx,Uid:7e88b61e-2da8-431f-b467-19b4511c59b5,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:01.828116 systemd[1]: Created slice kubepods-besteffort-poda3b8128f_d493_4195_97c7_5bfd119ae380.slice - libcontainer container kubepods-besteffort-poda3b8128f_d493_4195_97c7_5bfd119ae380.slice. 
Sep 16 05:04:01.858409 kubelet[2798]: I0916 05:04:01.858023 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3b8128f-d493-4195-97c7-5bfd119ae380-calico-apiserver-certs\") pod \"calico-apiserver-66968c6c76-68bn8\" (UID: \"a3b8128f-d493-4195-97c7-5bfd119ae380\") " pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" Sep 16 05:04:01.858409 kubelet[2798]: I0916 05:04:01.858115 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9fc\" (UniqueName: \"kubernetes.io/projected/a3b8128f-d493-4195-97c7-5bfd119ae380-kube-api-access-pk9fc\") pod \"calico-apiserver-66968c6c76-68bn8\" (UID: \"a3b8128f-d493-4195-97c7-5bfd119ae380\") " pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" Sep 16 05:04:01.903183 systemd[1]: Created slice kubepods-besteffort-podb3be170f_16c3_4c87_859b_e3dfd34f1535.slice - libcontainer container kubepods-besteffort-podb3be170f_16c3_4c87_859b_e3dfd34f1535.slice. Sep 16 05:04:01.912872 containerd[1553]: time="2025-09-16T05:04:01.912748629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7874456d44-jc6vl,Uid:b47c184d-ee1c-4ffb-9645-68a4632c0a00,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:01.922078 systemd[1]: Created slice kubepods-besteffort-podc5cca2d9_ccc0_4c6f_9a63_e5027c178bbe.slice - libcontainer container kubepods-besteffort-podc5cca2d9_ccc0_4c6f_9a63_e5027c178bbe.slice. Sep 16 05:04:01.951328 systemd[1]: Created slice kubepods-burstable-pod80c0a086_363d_48e6_90a4_f5b72297ad60.slice - libcontainer container kubepods-burstable-pod80c0a086_363d_48e6_90a4_f5b72297ad60.slice. 
Sep 16 05:04:01.959671 kubelet[2798]: I0916 05:04:01.959220 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cab2cef-f730-4dad-a5fd-db0e5642a330-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-pmjm8\" (UID: \"8cab2cef-f730-4dad-a5fd-db0e5642a330\") " pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:01.959671 kubelet[2798]: I0916 05:04:01.959283 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8dl4\" (UniqueName: \"kubernetes.io/projected/b3be170f-16c3-4c87-859b-e3dfd34f1535-kube-api-access-l8dl4\") pod \"calico-apiserver-66968c6c76-2jqqw\" (UID: \"b3be170f-16c3-4c87-859b-e3dfd34f1535\") " pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" Sep 16 05:04:01.959671 kubelet[2798]: I0916 05:04:01.959319 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80c0a086-363d-48e6-90a4-f5b72297ad60-config-volume\") pod \"coredns-674b8bbfcf-gvcht\" (UID: \"80c0a086-363d-48e6-90a4-f5b72297ad60\") " pod="kube-system/coredns-674b8bbfcf-gvcht" Sep 16 05:04:01.959671 kubelet[2798]: I0916 05:04:01.959392 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cst\" (UniqueName: \"kubernetes.io/projected/80c0a086-363d-48e6-90a4-f5b72297ad60-kube-api-access-58cst\") pod \"coredns-674b8bbfcf-gvcht\" (UID: \"80c0a086-363d-48e6-90a4-f5b72297ad60\") " pod="kube-system/coredns-674b8bbfcf-gvcht" Sep 16 05:04:01.959671 kubelet[2798]: I0916 05:04:01.959426 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgk4\" (UniqueName: \"kubernetes.io/projected/c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe-kube-api-access-qlgk4\") pod \"calico-kube-controllers-7fc65d5cbc-w5nr9\" (UID: 
\"c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe\") " pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" Sep 16 05:04:01.960060 kubelet[2798]: I0916 05:04:01.959457 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe-tigera-ca-bundle\") pod \"calico-kube-controllers-7fc65d5cbc-w5nr9\" (UID: \"c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe\") " pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" Sep 16 05:04:01.960060 kubelet[2798]: I0916 05:04:01.959482 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8cab2cef-f730-4dad-a5fd-db0e5642a330-goldmane-key-pair\") pod \"goldmane-54d579b49d-pmjm8\" (UID: \"8cab2cef-f730-4dad-a5fd-db0e5642a330\") " pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:01.960060 kubelet[2798]: I0916 05:04:01.959519 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cab2cef-f730-4dad-a5fd-db0e5642a330-config\") pod \"goldmane-54d579b49d-pmjm8\" (UID: \"8cab2cef-f730-4dad-a5fd-db0e5642a330\") " pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:01.960993 kubelet[2798]: I0916 05:04:01.960950 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3be170f-16c3-4c87-859b-e3dfd34f1535-calico-apiserver-certs\") pod \"calico-apiserver-66968c6c76-2jqqw\" (UID: \"b3be170f-16c3-4c87-859b-e3dfd34f1535\") " pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" Sep 16 05:04:01.961100 kubelet[2798]: I0916 05:04:01.961005 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwnx\" (UniqueName: 
\"kubernetes.io/projected/8cab2cef-f730-4dad-a5fd-db0e5642a330-kube-api-access-knwnx\") pod \"goldmane-54d579b49d-pmjm8\" (UID: \"8cab2cef-f730-4dad-a5fd-db0e5642a330\") " pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:01.982030 systemd[1]: Created slice kubepods-besteffort-pod8cab2cef_f730_4dad_a5fd_db0e5642a330.slice - libcontainer container kubepods-besteffort-pod8cab2cef_f730_4dad_a5fd_db0e5642a330.slice. Sep 16 05:04:02.113089 containerd[1553]: time="2025-09-16T05:04:02.112319980Z" level=error msg="Failed to destroy network for sandbox \"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.115943 containerd[1553]: time="2025-09-16T05:04:02.115144612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rc6fx,Uid:7e88b61e-2da8-431f-b467-19b4511c59b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.117417 kubelet[2798]: E0916 05:04:02.117359 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.118648 kubelet[2798]: E0916 05:04:02.118592 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rc6fx" Sep 16 05:04:02.119588 kubelet[2798]: E0916 05:04:02.119414 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rc6fx" Sep 16 05:04:02.119588 kubelet[2798]: E0916 05:04:02.119504 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rc6fx_kube-system(7e88b61e-2da8-431f-b467-19b4511c59b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rc6fx_kube-system(7e88b61e-2da8-431f-b467-19b4511c59b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60714c625185cb332a13cd0e9bf928c053b62d0f13f9e30b6d6db3065de856e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rc6fx" podUID="7e88b61e-2da8-431f-b467-19b4511c59b5" Sep 16 05:04:02.134660 containerd[1553]: time="2025-09-16T05:04:02.134313724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-68bn8,Uid:a3b8128f-d493-4195-97c7-5bfd119ae380,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:02.134660 containerd[1553]: time="2025-09-16T05:04:02.134471789Z" level=error msg="Failed to destroy network for 
sandbox \"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.137483 containerd[1553]: time="2025-09-16T05:04:02.137423123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7874456d44-jc6vl,Uid:b47c184d-ee1c-4ffb-9645-68a4632c0a00,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.137866 kubelet[2798]: E0916 05:04:02.137810 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.138220 kubelet[2798]: E0916 05:04:02.137878 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7874456d44-jc6vl" Sep 16 05:04:02.138220 kubelet[2798]: E0916 05:04:02.137911 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7874456d44-jc6vl" Sep 16 05:04:02.138778 kubelet[2798]: E0916 05:04:02.138002 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7874456d44-jc6vl_calico-system(b47c184d-ee1c-4ffb-9645-68a4632c0a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7874456d44-jc6vl_calico-system(b47c184d-ee1c-4ffb-9645-68a4632c0a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73bf45072f11da8171bad110c3d1cb0ab75d4585b8abdd3900f5572d4c8821b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7874456d44-jc6vl" podUID="b47c184d-ee1c-4ffb-9645-68a4632c0a00" Sep 16 05:04:02.210407 containerd[1553]: time="2025-09-16T05:04:02.210313410Z" level=error msg="Failed to destroy network for sandbox \"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.212154 containerd[1553]: time="2025-09-16T05:04:02.212043490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-68bn8,Uid:a3b8128f-d493-4195-97c7-5bfd119ae380,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.212623 kubelet[2798]: E0916 05:04:02.212519 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.212757 kubelet[2798]: E0916 05:04:02.212628 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" Sep 16 05:04:02.212757 kubelet[2798]: E0916 05:04:02.212666 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" Sep 16 05:04:02.213263 kubelet[2798]: E0916 05:04:02.212851 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66968c6c76-68bn8_calico-apiserver(a3b8128f-d493-4195-97c7-5bfd119ae380)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66968c6c76-68bn8_calico-apiserver(a3b8128f-d493-4195-97c7-5bfd119ae380)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"0c1a871b8db643405ad53884fa6c9b12458a4230a7c80b5670edfff8beff0dfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" podUID="a3b8128f-d493-4195-97c7-5bfd119ae380" Sep 16 05:04:02.228525 containerd[1553]: time="2025-09-16T05:04:02.228155409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-2jqqw,Uid:b3be170f-16c3-4c87-859b-e3dfd34f1535,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:02.239901 containerd[1553]: time="2025-09-16T05:04:02.239849346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fc65d5cbc-w5nr9,Uid:c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:02.265586 containerd[1553]: time="2025-09-16T05:04:02.265519373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvcht,Uid:80c0a086-363d-48e6-90a4-f5b72297ad60,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:02.303475 containerd[1553]: time="2025-09-16T05:04:02.302410052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pmjm8,Uid:8cab2cef-f730-4dad-a5fd-db0e5642a330,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:02.332681 systemd[1]: run-netns-cni\x2de36dcdbd\x2dbafe\x2d7964\x2d286e\x2d3b1c4abdab77.mount: Deactivated successfully. 
Sep 16 05:04:02.446352 containerd[1553]: time="2025-09-16T05:04:02.446053992Z" level=error msg="Failed to destroy network for sandbox \"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.454670 containerd[1553]: time="2025-09-16T05:04:02.454605919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvcht,Uid:80c0a086-363d-48e6-90a4-f5b72297ad60,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.455592 kubelet[2798]: E0916 05:04:02.455205 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.455592 kubelet[2798]: E0916 05:04:02.455316 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvcht" Sep 16 05:04:02.455592 kubelet[2798]: E0916 05:04:02.455349 2798 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvcht" Sep 16 05:04:02.455841 kubelet[2798]: E0916 05:04:02.455435 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gvcht_kube-system(80c0a086-363d-48e6-90a4-f5b72297ad60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gvcht_kube-system(80c0a086-363d-48e6-90a4-f5b72297ad60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b037c03d359b7ed45e460c67d1258f8be6b0309abd84f4fe24f4edaeb811770b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gvcht" podUID="80c0a086-363d-48e6-90a4-f5b72297ad60" Sep 16 05:04:02.457040 systemd[1]: run-netns-cni\x2d0b3a0d23\x2d8d09\x2d0e46\x2d74ad\x2d306a5cc95de2.mount: Deactivated successfully. 
Sep 16 05:04:02.465761 containerd[1553]: time="2025-09-16T05:04:02.465712032Z" level=error msg="Failed to destroy network for sandbox \"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.472054 containerd[1553]: time="2025-09-16T05:04:02.471173347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-2jqqw,Uid:b3be170f-16c3-4c87-859b-e3dfd34f1535,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.472234 kubelet[2798]: E0916 05:04:02.471519 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.472234 kubelet[2798]: E0916 05:04:02.471615 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" Sep 16 05:04:02.472234 kubelet[2798]: E0916 05:04:02.471673 2798 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" Sep 16 05:04:02.473123 systemd[1]: run-netns-cni\x2d53380d54\x2dcbbd\x2d2446\x2d2225\x2d2170efd4238e.mount: Deactivated successfully. Sep 16 05:04:02.474371 kubelet[2798]: E0916 05:04:02.473787 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66968c6c76-2jqqw_calico-apiserver(b3be170f-16c3-4c87-859b-e3dfd34f1535)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66968c6c76-2jqqw_calico-apiserver(b3be170f-16c3-4c87-859b-e3dfd34f1535)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccb01b1d2681a5348a01db4b56766aabe4feb810158b62a62d1832e693ff5105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" podUID="b3be170f-16c3-4c87-859b-e3dfd34f1535" Sep 16 05:04:02.500901 containerd[1553]: time="2025-09-16T05:04:02.500810649Z" level=error msg="Failed to destroy network for sandbox \"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.505687 containerd[1553]: time="2025-09-16T05:04:02.505625153Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7fc65d5cbc-w5nr9,Uid:c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.506109 systemd[1]: run-netns-cni\x2dab244e29\x2d49f5\x2d7064\x2da318\x2d4af217d4158d.mount: Deactivated successfully. Sep 16 05:04:02.508499 kubelet[2798]: E0916 05:04:02.508444 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.509133 kubelet[2798]: E0916 05:04:02.508538 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" Sep 16 05:04:02.509133 kubelet[2798]: E0916 05:04:02.508589 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" Sep 16 05:04:02.509133 kubelet[2798]: E0916 05:04:02.508678 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fc65d5cbc-w5nr9_calico-system(c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fc65d5cbc-w5nr9_calico-system(c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e7400c2cf35abca3de2885b12c45b8728c0f91ec89d57dc2d081729cc02c62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" podUID="c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe" Sep 16 05:04:02.527186 containerd[1553]: time="2025-09-16T05:04:02.527112176Z" level=error msg="Failed to destroy network for sandbox \"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.528926 containerd[1553]: time="2025-09-16T05:04:02.528864640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pmjm8,Uid:8cab2cef-f730-4dad-a5fd-db0e5642a330,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.529326 kubelet[2798]: E0916 05:04:02.529266 2798 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.529440 kubelet[2798]: E0916 05:04:02.529334 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:02.529440 kubelet[2798]: E0916 05:04:02.529365 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-pmjm8" Sep 16 05:04:02.529642 kubelet[2798]: E0916 05:04:02.529458 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-pmjm8_calico-system(8cab2cef-f730-4dad-a5fd-db0e5642a330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-pmjm8_calico-system(8cab2cef-f730-4dad-a5fd-db0e5642a330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c0ff8d8cb41f12fe0166517a8d1e417ba9ba520178dc256e24b54528c744258\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-pmjm8" podUID="8cab2cef-f730-4dad-a5fd-db0e5642a330" Sep 16 05:04:02.655918 systemd[1]: Created slice kubepods-besteffort-podd62b9cd3_4af7_4636_9ff3_4ff29ca03a41.slice - libcontainer container kubepods-besteffort-podd62b9cd3_4af7_4636_9ff3_4ff29ca03a41.slice. Sep 16 05:04:02.659415 containerd[1553]: time="2025-09-16T05:04:02.659365616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xxzmm,Uid:d62b9cd3-4af7-4636-9ff3-4ff29ca03a41,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:02.732381 containerd[1553]: time="2025-09-16T05:04:02.730933291Z" level=error msg="Failed to destroy network for sandbox \"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.734013 containerd[1553]: time="2025-09-16T05:04:02.733951394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xxzmm,Uid:d62b9cd3-4af7-4636-9ff3-4ff29ca03a41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 05:04:02.734765 kubelet[2798]: E0916 05:04:02.734247 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 16 05:04:02.734765 kubelet[2798]: E0916 05:04:02.734348 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xxzmm" Sep 16 05:04:02.734765 kubelet[2798]: E0916 05:04:02.734383 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xxzmm" Sep 16 05:04:02.735347 kubelet[2798]: E0916 05:04:02.734464 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xxzmm_calico-system(d62b9cd3-4af7-4636-9ff3-4ff29ca03a41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xxzmm_calico-system(d62b9cd3-4af7-4636-9ff3-4ff29ca03a41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f4cf141c692d69fc891ac8cfb7cbc848d10fca17706e0c5424ee29a3c493ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xxzmm" podUID="d62b9cd3-4af7-4636-9ff3-4ff29ca03a41" Sep 16 05:04:02.841089 containerd[1553]: time="2025-09-16T05:04:02.840928630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 05:04:03.298339 systemd[1]: 
run-netns-cni\x2dbc760967\x2dadbd\x2daadd\x2de720\x2d05b244cf19bb.mount: Deactivated successfully. Sep 16 05:04:09.857013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720234710.mount: Deactivated successfully. Sep 16 05:04:09.894367 containerd[1553]: time="2025-09-16T05:04:09.894267356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:09.895680 containerd[1553]: time="2025-09-16T05:04:09.895623878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 05:04:09.897186 containerd[1553]: time="2025-09-16T05:04:09.897114427Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:09.900733 containerd[1553]: time="2025-09-16T05:04:09.899714327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:09.900733 containerd[1553]: time="2025-09-16T05:04:09.900577206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.059527152s" Sep 16 05:04:09.900733 containerd[1553]: time="2025-09-16T05:04:09.900615903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 05:04:09.928513 containerd[1553]: time="2025-09-16T05:04:09.928351855Z" level=info msg="CreateContainer within sandbox 
\"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 05:04:09.947583 containerd[1553]: time="2025-09-16T05:04:09.945425821Z" level=info msg="Container c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:09.961967 containerd[1553]: time="2025-09-16T05:04:09.961908269Z" level=info msg="CreateContainer within sandbox \"9ea35bc031173180c1c6baf3b506180b9a865ffbeea826be68cfa33af56d849c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\"" Sep 16 05:04:09.963305 containerd[1553]: time="2025-09-16T05:04:09.963238937Z" level=info msg="StartContainer for \"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\"" Sep 16 05:04:09.967221 containerd[1553]: time="2025-09-16T05:04:09.967178801Z" level=info msg="connecting to shim c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0" address="unix:///run/containerd/s/1b52a3de9859f15adb45f10430c1557a8bd48c168f92e8435a9fff56cd7b5f31" protocol=ttrpc version=3 Sep 16 05:04:09.993764 systemd[1]: Started cri-containerd-c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0.scope - libcontainer container c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0. Sep 16 05:04:10.068054 containerd[1553]: time="2025-09-16T05:04:10.067992677Z" level=info msg="StartContainer for \"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\" returns successfully" Sep 16 05:04:10.192208 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 05:04:10.192391 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 16 05:04:10.432860 kubelet[2798]: I0916 05:04:10.432788 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-ca-bundle\") pod \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " Sep 16 05:04:10.433444 kubelet[2798]: I0916 05:04:10.432890 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-backend-key-pair\") pod \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " Sep 16 05:04:10.433444 kubelet[2798]: I0916 05:04:10.432928 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxldp\" (UniqueName: \"kubernetes.io/projected/b47c184d-ee1c-4ffb-9645-68a4632c0a00-kube-api-access-rxldp\") pod \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\" (UID: \"b47c184d-ee1c-4ffb-9645-68a4632c0a00\") " Sep 16 05:04:10.435369 kubelet[2798]: I0916 05:04:10.435210 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b47c184d-ee1c-4ffb-9645-68a4632c0a00" (UID: "b47c184d-ee1c-4ffb-9645-68a4632c0a00"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 05:04:10.439745 kubelet[2798]: I0916 05:04:10.439666 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47c184d-ee1c-4ffb-9645-68a4632c0a00-kube-api-access-rxldp" (OuterVolumeSpecName: "kube-api-access-rxldp") pod "b47c184d-ee1c-4ffb-9645-68a4632c0a00" (UID: "b47c184d-ee1c-4ffb-9645-68a4632c0a00"). InnerVolumeSpecName "kube-api-access-rxldp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:04:10.441748 kubelet[2798]: I0916 05:04:10.441684 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b47c184d-ee1c-4ffb-9645-68a4632c0a00" (UID: "b47c184d-ee1c-4ffb-9645-68a4632c0a00"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 05:04:10.534143 kubelet[2798]: I0916 05:04:10.534060 2798 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-backend-key-pair\") on node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" DevicePath \"\"" Sep 16 05:04:10.535006 kubelet[2798]: I0916 05:04:10.534464 2798 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rxldp\" (UniqueName: \"kubernetes.io/projected/b47c184d-ee1c-4ffb-9645-68a4632c0a00-kube-api-access-rxldp\") on node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" DevicePath \"\"" Sep 16 05:04:10.535006 kubelet[2798]: I0916 05:04:10.534706 2798 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b47c184d-ee1c-4ffb-9645-68a4632c0a00-whisker-ca-bundle\") on node \"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f\" DevicePath \"\"" Sep 16 05:04:10.653810 systemd[1]: Removed slice kubepods-besteffort-podb47c184d_ee1c_4ffb_9645_68a4632c0a00.slice - libcontainer container kubepods-besteffort-podb47c184d_ee1c_4ffb_9645_68a4632c0a00.slice. Sep 16 05:04:10.860757 systemd[1]: var-lib-kubelet-pods-b47c184d\x2dee1c\x2d4ffb\x2d9645\x2d68a4632c0a00-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 16 05:04:10.860926 systemd[1]: var-lib-kubelet-pods-b47c184d\x2dee1c\x2d4ffb\x2d9645\x2d68a4632c0a00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drxldp.mount: Deactivated successfully. Sep 16 05:04:10.922596 kubelet[2798]: I0916 05:04:10.922488 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c5w9x" podStartSLOduration=2.329375497 podStartE2EDuration="20.922464362s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="2025-09-16 05:03:51.308825081 +0000 UTC m=+22.905461260" lastFinishedPulling="2025-09-16 05:04:09.901913925 +0000 UTC m=+41.498550125" observedRunningTime="2025-09-16 05:04:10.902232409 +0000 UTC m=+42.498868609" watchObservedRunningTime="2025-09-16 05:04:10.922464362 +0000 UTC m=+42.519100566" Sep 16 05:04:10.986520 systemd[1]: Created slice kubepods-besteffort-poda4aa7de2_a3f4_40fe_80bb_73021f3e6264.slice - libcontainer container kubepods-besteffort-poda4aa7de2_a3f4_40fe_80bb_73021f3e6264.slice. 
Sep 16 05:04:11.037691 kubelet[2798]: I0916 05:04:11.037634 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c8x\" (UniqueName: \"kubernetes.io/projected/a4aa7de2-a3f4-40fe-80bb-73021f3e6264-kube-api-access-t8c8x\") pod \"whisker-7d48784779-95s4x\" (UID: \"a4aa7de2-a3f4-40fe-80bb-73021f3e6264\") " pod="calico-system/whisker-7d48784779-95s4x" Sep 16 05:04:11.037691 kubelet[2798]: I0916 05:04:11.037703 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4aa7de2-a3f4-40fe-80bb-73021f3e6264-whisker-backend-key-pair\") pod \"whisker-7d48784779-95s4x\" (UID: \"a4aa7de2-a3f4-40fe-80bb-73021f3e6264\") " pod="calico-system/whisker-7d48784779-95s4x" Sep 16 05:04:11.037965 kubelet[2798]: I0916 05:04:11.037732 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4aa7de2-a3f4-40fe-80bb-73021f3e6264-whisker-ca-bundle\") pod \"whisker-7d48784779-95s4x\" (UID: \"a4aa7de2-a3f4-40fe-80bb-73021f3e6264\") " pod="calico-system/whisker-7d48784779-95s4x" Sep 16 05:04:11.292763 containerd[1553]: time="2025-09-16T05:04:11.292613269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d48784779-95s4x,Uid:a4aa7de2-a3f4-40fe-80bb-73021f3e6264,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:11.438395 systemd-networkd[1439]: cali8911d2a4bea: Link UP Sep 16 05:04:11.442596 systemd-networkd[1439]: cali8911d2a4bea: Gained carrier Sep 16 05:04:11.465780 containerd[1553]: 2025-09-16 05:04:11.332 [INFO][3891] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 05:04:11.465780 containerd[1553]: 2025-09-16 05:04:11.345 [INFO][3891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0 whisker-7d48784779- calico-system a4aa7de2-a3f4-40fe-80bb-73021f3e6264 929 0 2025-09-16 05:04:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d48784779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f whisker-7d48784779-95s4x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8911d2a4bea [] [] }} ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-" Sep 16 05:04:11.465780 containerd[1553]: 2025-09-16 05:04:11.345 [INFO][3891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.465780 containerd[1553]: 2025-09-16 05:04:11.378 [INFO][3904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" HandleID="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.378 [INFO][3904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" HandleID="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" 
Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"whisker-7d48784779-95s4x", "timestamp":"2025-09-16 05:04:11.378491932 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.378 [INFO][3904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.379 [INFO][3904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.379 [INFO][3904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.390 [INFO][3904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.396 [INFO][3904] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.401 [INFO][3904] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466144 containerd[1553]: 2025-09-16 05:04:11.403 [INFO][3904] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 
host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.405 [INFO][3904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.405 [INFO][3904] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.407 [INFO][3904] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14 Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.414 [INFO][3904] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.421 [INFO][3904] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.65/26] block=192.168.55.64/26 handle="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.421 [INFO][3904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.65/26] handle="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.422 [INFO][3904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:11.466623 containerd[1553]: 2025-09-16 05:04:11.422 [INFO][3904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.65/26] IPv6=[] ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" HandleID="k8s-pod-network.9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.467015 containerd[1553]: 2025-09-16 05:04:11.425 [INFO][3891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0", GenerateName:"whisker-7d48784779-", Namespace:"calico-system", SelfLink:"", UID:"a4aa7de2-a3f4-40fe-80bb-73021f3e6264", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d48784779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"whisker-7d48784779-95s4x", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8911d2a4bea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:11.467151 containerd[1553]: 2025-09-16 05:04:11.425 [INFO][3891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.65/32] ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.467151 containerd[1553]: 2025-09-16 05:04:11.425 [INFO][3891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8911d2a4bea ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.467151 containerd[1553]: 2025-09-16 05:04:11.438 [INFO][3891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.467320 containerd[1553]: 2025-09-16 05:04:11.439 [INFO][3891] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0", GenerateName:"whisker-7d48784779-", Namespace:"calico-system", SelfLink:"", UID:"a4aa7de2-a3f4-40fe-80bb-73021f3e6264", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d48784779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14", Pod:"whisker-7d48784779-95s4x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8911d2a4bea", MAC:"3e:21:1c:98:86:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:11.467467 containerd[1553]: 2025-09-16 05:04:11.461 [INFO][3891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" Namespace="calico-system" Pod="whisker-7d48784779-95s4x" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-whisker--7d48784779--95s4x-eth0" Sep 16 05:04:11.497255 containerd[1553]: 
time="2025-09-16T05:04:11.497133938Z" level=info msg="connecting to shim 9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14" address="unix:///run/containerd/s/2a4e18ddd58d894f6eea83585750a6d8cb2880c643d1fd8608496adc9b958c33" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:11.529813 systemd[1]: Started cri-containerd-9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14.scope - libcontainer container 9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14. Sep 16 05:04:11.599161 containerd[1553]: time="2025-09-16T05:04:11.599104784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d48784779-95s4x,Uid:a4aa7de2-a3f4-40fe-80bb-73021f3e6264,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14\"" Sep 16 05:04:11.602549 containerd[1553]: time="2025-09-16T05:04:11.601761070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 05:04:12.595537 systemd-networkd[1439]: vxlan.calico: Link UP Sep 16 05:04:12.596628 systemd-networkd[1439]: vxlan.calico: Gained carrier Sep 16 05:04:12.650253 containerd[1553]: time="2025-09-16T05:04:12.648831445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-68bn8,Uid:a3b8128f-d493-4195-97c7-5bfd119ae380,Namespace:calico-apiserver,Attempt:0,}" Sep 16 05:04:12.655074 kubelet[2798]: I0916 05:04:12.654728 2798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47c184d-ee1c-4ffb-9645-68a4632c0a00" path="/var/lib/kubelet/pods/b47c184d-ee1c-4ffb-9645-68a4632c0a00/volumes" Sep 16 05:04:12.876047 containerd[1553]: time="2025-09-16T05:04:12.874665923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.886584 containerd[1553]: time="2025-09-16T05:04:12.884460278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: 
active requests=0, bytes read=4661291" Sep 16 05:04:12.892949 containerd[1553]: time="2025-09-16T05:04:12.887427061Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.904924 containerd[1553]: time="2025-09-16T05:04:12.904049248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.302243137s" Sep 16 05:04:12.904924 containerd[1553]: time="2025-09-16T05:04:12.904147818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 05:04:12.905145 containerd[1553]: time="2025-09-16T05:04:12.904975366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.918275 containerd[1553]: time="2025-09-16T05:04:12.917826149Z" level=info msg="CreateContainer within sandbox \"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 05:04:12.936786 containerd[1553]: time="2025-09-16T05:04:12.936736152Z" level=info msg="Container 3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:12.960391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773881312.mount: Deactivated successfully. 
Sep 16 05:04:12.987578 containerd[1553]: time="2025-09-16T05:04:12.987343902Z" level=info msg="CreateContainer within sandbox \"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5\"" Sep 16 05:04:12.993305 systemd-networkd[1439]: calib61dce90d67: Link UP Sep 16 05:04:12.996013 systemd-networkd[1439]: calib61dce90d67: Gained carrier Sep 16 05:04:12.999252 containerd[1553]: time="2025-09-16T05:04:12.999200638Z" level=info msg="StartContainer for \"3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5\"" Sep 16 05:04:13.002765 containerd[1553]: time="2025-09-16T05:04:13.002715771Z" level=info msg="connecting to shim 3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5" address="unix:///run/containerd/s/2a4e18ddd58d894f6eea83585750a6d8cb2880c643d1fd8608496adc9b958c33" protocol=ttrpc version=3 Sep 16 05:04:13.031657 containerd[1553]: 2025-09-16 05:04:12.806 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0 calico-apiserver-66968c6c76- calico-apiserver a3b8128f-d493-4195-97c7-5bfd119ae380 856 0 2025-09-16 05:03:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66968c6c76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f calico-apiserver-66968c6c76-68bn8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib61dce90d67 [] [] }} ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" 
WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-" Sep 16 05:04:13.031657 containerd[1553]: 2025-09-16 05:04:12.806 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.031657 containerd[1553]: 2025-09-16 05:04:12.880 [INFO][4139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" HandleID="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.881 [INFO][4139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" HandleID="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"calico-apiserver-66968c6c76-68bn8", "timestamp":"2025-09-16 05:04:12.880908508 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:13.031993 containerd[1553]: 
2025-09-16 05:04:12.881 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.881 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.881 [INFO][4139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.903 [INFO][4139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.914 [INFO][4139] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.920 [INFO][4139] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.031993 containerd[1553]: 2025-09-16 05:04:12.924 [INFO][4139] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.929 [INFO][4139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.930 [INFO][4139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.938 [INFO][4139] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0 Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.946 [INFO][4139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.957 [INFO][4139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.66/26] block=192.168.55.64/26 handle="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.957 [INFO][4139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.66/26] handle="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.957 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:13.032909 containerd[1553]: 2025-09-16 05:04:12.957 [INFO][4139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.66/26] IPv6=[] ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" HandleID="k8s-pod-network.3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.034229 containerd[1553]: 2025-09-16 05:04:12.966 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0", GenerateName:"calico-apiserver-66968c6c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3b8128f-d493-4195-97c7-5bfd119ae380", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66968c6c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", 
ContainerID:"", Pod:"calico-apiserver-66968c6c76-68bn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib61dce90d67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:13.034359 containerd[1553]: 2025-09-16 05:04:12.967 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.66/32] ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.034359 containerd[1553]: 2025-09-16 05:04:12.967 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib61dce90d67 ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.034359 containerd[1553]: 2025-09-16 05:04:12.996 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.034519 containerd[1553]: 2025-09-16 05:04:12.996 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" 
Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0", GenerateName:"calico-apiserver-66968c6c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3b8128f-d493-4195-97c7-5bfd119ae380", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66968c6c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0", Pod:"calico-apiserver-66968c6c76-68bn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib61dce90d67", MAC:"92:c2:f0:2a:1f:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:13.034519 containerd[1553]: 2025-09-16 05:04:13.022 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-68bn8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--68bn8-eth0" Sep 16 05:04:13.059796 systemd[1]: Started cri-containerd-3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5.scope - libcontainer container 3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5. Sep 16 05:04:13.086608 containerd[1553]: time="2025-09-16T05:04:13.086486539Z" level=info msg="connecting to shim 3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0" address="unix:///run/containerd/s/75001916bbe3e481528d2de14b403ac59b6f157a77e479efc573883e2b639113" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:13.137807 systemd[1]: Started cri-containerd-3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0.scope - libcontainer container 3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0. 
Sep 16 05:04:13.241043 containerd[1553]: time="2025-09-16T05:04:13.240992905Z" level=info msg="StartContainer for \"3de8dd2b9d8eda0dc4c42e0e3842ec69f89cda2a23d8ec8d9b327cc71ea640e5\" returns successfully" Sep 16 05:04:13.243842 containerd[1553]: time="2025-09-16T05:04:13.243797844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 05:04:13.256722 systemd-networkd[1439]: cali8911d2a4bea: Gained IPv6LL Sep 16 05:04:13.326506 containerd[1553]: time="2025-09-16T05:04:13.326451550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-68bn8,Uid:a3b8128f-d493-4195-97c7-5bfd119ae380,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0\"" Sep 16 05:04:13.645032 containerd[1553]: time="2025-09-16T05:04:13.644591946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fc65d5cbc-w5nr9,Uid:c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:13.787023 systemd-networkd[1439]: cali4a8b86f13cb: Link UP Sep 16 05:04:13.787351 systemd-networkd[1439]: cali4a8b86f13cb: Gained carrier Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.694 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0 calico-kube-controllers-7fc65d5cbc- calico-system c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe 861 0 2025-09-16 05:03:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fc65d5cbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f calico-kube-controllers-7fc65d5cbc-w5nr9 eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a8b86f13cb [] [] }} ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.695 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.732 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" HandleID="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.732 [INFO][4278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" HandleID="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"calico-kube-controllers-7fc65d5cbc-w5nr9", "timestamp":"2025-09-16 05:04:13.73224463 +0000 UTC"}, 
Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.732 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.732 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.732 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.743 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.751 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.759 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.762 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.765 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.765 [INFO][4278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.55.64/26 handle="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.767 [INFO][4278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988 Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.771 [INFO][4278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.778 [INFO][4278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.67/26] block=192.168.55.64/26 handle="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.778 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.67/26] handle="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.778 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:13.808362 containerd[1553]: 2025-09-16 05:04:13.778 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.67/26] IPv6=[] ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" HandleID="k8s-pod-network.c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.781 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0", GenerateName:"calico-kube-controllers-7fc65d5cbc-", Namespace:"calico-system", SelfLink:"", UID:"c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fc65d5cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"calico-kube-controllers-7fc65d5cbc-w5nr9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a8b86f13cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.781 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.67/32] ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.781 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a8b86f13cb ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.784 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.785 [INFO][4266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0", GenerateName:"calico-kube-controllers-7fc65d5cbc-", Namespace:"calico-system", SelfLink:"", UID:"c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fc65d5cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988", Pod:"calico-kube-controllers-7fc65d5cbc-w5nr9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a8b86f13cb", MAC:"b6:92:03:d9:22:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 
05:04:13.811364 containerd[1553]: 2025-09-16 05:04:13.802 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" Namespace="calico-system" Pod="calico-kube-controllers-7fc65d5cbc-w5nr9" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--kube--controllers--7fc65d5cbc--w5nr9-eth0" Sep 16 05:04:13.848933 containerd[1553]: time="2025-09-16T05:04:13.848836457Z" level=info msg="connecting to shim c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988" address="unix:///run/containerd/s/c8492bd57ec160462b97e3e07ef4d24f9219605987d467e333aa2566746b75fe" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:13.881879 systemd[1]: Started cri-containerd-c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988.scope - libcontainer container c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988. Sep 16 05:04:13.976658 containerd[1553]: time="2025-09-16T05:04:13.976536338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fc65d5cbc-w5nr9,Uid:c5cca2d9-ccc0-4c6f-9a63-e5027c178bbe,Namespace:calico-system,Attempt:0,} returns sandbox id \"c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988\"" Sep 16 05:04:14.217462 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Sep 16 05:04:14.602345 systemd-networkd[1439]: calib61dce90d67: Gained IPv6LL Sep 16 05:04:14.648719 containerd[1553]: time="2025-09-16T05:04:14.648656757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xxzmm,Uid:d62b9cd3-4af7-4636-9ff3-4ff29ca03a41,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:14.928672 systemd-networkd[1439]: calic513090aa6a: Link UP Sep 16 05:04:14.931029 systemd-networkd[1439]: calic513090aa6a: Gained carrier Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.761 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0 csi-node-driver- calico-system d62b9cd3-4af7-4636-9ff3-4ff29ca03a41 744 0 2025-09-16 05:03:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f csi-node-driver-xxzmm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic513090aa6a [] [] }} ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.761 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.839 [INFO][4363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" HandleID="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.840 [INFO][4363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" 
HandleID="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"csi-node-driver-xxzmm", "timestamp":"2025-09-16 05:04:14.83984379 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.840 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.840 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.840 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.854 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.862 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.872 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.875 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.881 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.881 [INFO][4363] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.885 [INFO][4363] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5 Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.891 [INFO][4363] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 
handle="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.911 [INFO][4363] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.68/26] block=192.168.55.64/26 handle="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.911 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.68/26] handle="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.911 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:14.968975 containerd[1553]: 2025-09-16 05:04:14.911 [INFO][4363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.68/26] IPv6=[] ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" HandleID="k8s-pod-network.74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.917 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"csi-node-driver-xxzmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic513090aa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.917 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.68/32] ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.918 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic513090aa6a ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" 
WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.930 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.931 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d62b9cd3-4af7-4636-9ff3-4ff29ca03a41", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5", Pod:"csi-node-driver-xxzmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic513090aa6a", MAC:"12:b2:11:32:ce:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:14.972052 containerd[1553]: 2025-09-16 05:04:14.955 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" Namespace="calico-system" Pod="csi-node-driver-xxzmm" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-csi--node--driver--xxzmm-eth0" Sep 16 05:04:15.052595 containerd[1553]: time="2025-09-16T05:04:15.051182771Z" level=info msg="connecting to shim 74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5" address="unix:///run/containerd/s/3636e11f6620878619d727073e1cab75c35cb401fcea3e47a8159abc930b8aab" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:15.123593 systemd[1]: Started cri-containerd-74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5.scope - libcontainer container 74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5. 
Sep 16 05:04:15.201434 containerd[1553]: time="2025-09-16T05:04:15.201295037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xxzmm,Uid:d62b9cd3-4af7-4636-9ff3-4ff29ca03a41,Namespace:calico-system,Attempt:0,} returns sandbox id \"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5\""
Sep 16 05:04:15.283297 kubelet[2798]: I0916 05:04:15.283154 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 05:04:15.460085 containerd[1553]: time="2025-09-16T05:04:15.459514877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\" id:\"ae9214c62e801a889cb08fce0fff510b2dc747498c44b0e6f2b6456dfb7e8d7b\" pid:4437 exited_at:{seconds:1757999055 nanos:459168310}"
Sep 16 05:04:15.561481 systemd-networkd[1439]: cali4a8b86f13cb: Gained IPv6LL
Sep 16 05:04:15.658345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485242626.mount: Deactivated successfully.
Sep 16 05:04:15.670234 containerd[1553]: time="2025-09-16T05:04:15.670183094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\" id:\"45405b8e72a9b3e404e14cb723435faf10d5a6680c1f8caa094fa514fcff3e21\" pid:4461 exited_at:{seconds:1757999055 nanos:669879490}" Sep 16 05:04:15.683349 containerd[1553]: time="2025-09-16T05:04:15.683258112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.684442 containerd[1553]: time="2025-09-16T05:04:15.684381533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 05:04:15.686606 containerd[1553]: time="2025-09-16T05:04:15.685522273Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.688886 containerd[1553]: time="2025-09-16T05:04:15.688842209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.690058 containerd[1553]: time="2025-09-16T05:04:15.689970183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.446115318s" Sep 16 05:04:15.690217 containerd[1553]: time="2025-09-16T05:04:15.690187266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 05:04:15.691779 containerd[1553]: time="2025-09-16T05:04:15.691745819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 05:04:15.697365 containerd[1553]: time="2025-09-16T05:04:15.697306520Z" level=info msg="CreateContainer within sandbox \"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 05:04:15.705798 containerd[1553]: time="2025-09-16T05:04:15.705762204Z" level=info msg="Container 6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:15.719418 containerd[1553]: time="2025-09-16T05:04:15.719218414Z" level=info msg="CreateContainer within sandbox \"9b63ec594e681d79978a00de20563d10a54ab7e7789787782a417e8478c4ea14\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4\"" Sep 16 05:04:15.720638 containerd[1553]: time="2025-09-16T05:04:15.720533871Z" level=info msg="StartContainer for \"6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4\"" Sep 16 05:04:15.723349 containerd[1553]: time="2025-09-16T05:04:15.723308530Z" level=info msg="connecting to shim 6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4" address="unix:///run/containerd/s/2a4e18ddd58d894f6eea83585750a6d8cb2880c643d1fd8608496adc9b958c33" protocol=ttrpc version=3 Sep 16 05:04:15.754017 systemd[1]: Started cri-containerd-6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4.scope - libcontainer container 6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4. 
Sep 16 05:04:15.829305 containerd[1553]: time="2025-09-16T05:04:15.829183475Z" level=info msg="StartContainer for \"6d19ba1486b856e7ca0df87777f6f4e85207a83dc285cb3f05bc9e9c688d90b4\" returns successfully"
Sep 16 05:04:15.933503 kubelet[2798]: I0916 05:04:15.933431 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d48784779-95s4x" podStartSLOduration=1.843122127 podStartE2EDuration="5.933409311s" podCreationTimestamp="2025-09-16 05:04:10 +0000 UTC" firstStartedPulling="2025-09-16 05:04:11.601185739 +0000 UTC m=+43.197821914" lastFinishedPulling="2025-09-16 05:04:15.691472897 +0000 UTC m=+47.288109098" observedRunningTime="2025-09-16 05:04:15.932897826 +0000 UTC m=+47.529534028" watchObservedRunningTime="2025-09-16 05:04:15.933409311 +0000 UTC m=+47.530045514"
Sep 16 05:04:16.521968 systemd-networkd[1439]: calic513090aa6a: Gained IPv6LL
Sep 16 05:04:16.647672 containerd[1553]: time="2025-09-16T05:04:16.647189061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-2jqqw,Uid:b3be170f-16c3-4c87-859b-e3dfd34f1535,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 05:04:16.666474 containerd[1553]: time="2025-09-16T05:04:16.666401073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rc6fx,Uid:7e88b61e-2da8-431f-b467-19b4511c59b5,Namespace:kube-system,Attempt:0,}"
Sep 16 05:04:17.080919 systemd-networkd[1439]: cali1c5249a2952: Link UP
Sep 16 05:04:17.083264 systemd-networkd[1439]: cali1c5249a2952: Gained carrier
Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.872 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0 coredns-674b8bbfcf- kube-system 7e88b61e-2da8-431f-b467-19b4511c59b5 849 0 2025-09-16 05:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f coredns-674b8bbfcf-rc6fx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c5249a2952 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.872 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.955 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" HandleID="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.956 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" HandleID="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310070), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"coredns-674b8bbfcf-rc6fx", "timestamp":"2025-09-16 05:04:16.955722932 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.957 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.957 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.957 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.990 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:16.998 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.014 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.020 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.024 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 
host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.024 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.027 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.039 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.055 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.69/26] block=192.168.55.64/26 handle="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.055 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.69/26] handle="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.055 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:17.131196 containerd[1553]: 2025-09-16 05:04:17.056 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.69/26] IPv6=[] ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" HandleID="k8s-pod-network.57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.063 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7e88b61e-2da8-431f-b467-19b4511c59b5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"coredns-674b8bbfcf-rc6fx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c5249a2952", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.063 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.69/32] ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.064 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c5249a2952 ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.088 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.095 [INFO][4520] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7e88b61e-2da8-431f-b467-19b4511c59b5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce", Pod:"coredns-674b8bbfcf-rc6fx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c5249a2952", MAC:"de:13:86:60:2e:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:17.133528 containerd[1553]: 2025-09-16 05:04:17.122 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" Namespace="kube-system" Pod="coredns-674b8bbfcf-rc6fx" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--rc6fx-eth0" Sep 16 05:04:17.213595 systemd-networkd[1439]: cali820272e6bd8: Link UP Sep 16 05:04:17.216532 systemd-networkd[1439]: cali820272e6bd8: Gained carrier Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:16.867 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0 calico-apiserver-66968c6c76- calico-apiserver b3be170f-16c3-4c87-859b-e3dfd34f1535 860 0 2025-09-16 05:03:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66968c6c76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f calico-apiserver-66968c6c76-2jqqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali820272e6bd8 [] [] }} ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-" Sep 16 05:04:17.259948 containerd[1553]: 
2025-09-16 05:04:16.867 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:16.975 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" HandleID="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:16.977 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" HandleID="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038b370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"calico-apiserver-66968c6c76-2jqqw", "timestamp":"2025-09-16 05:04:16.975403348 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:16.977 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.055 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.056 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.116 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.136 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.149 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.154 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.161 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.161 [INFO][4535] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.163 [INFO][4535] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d Sep 16 05:04:17.259948 containerd[1553]: 
2025-09-16 05:04:17.172 [INFO][4535] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 handle="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.197 [INFO][4535] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.70/26] block=192.168.55.64/26 handle="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.197 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.70/26] handle="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.198 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 05:04:17.259948 containerd[1553]: 2025-09-16 05:04:17.198 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.70/26] IPv6=[] ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" HandleID="k8s-pod-network.ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.205 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0", GenerateName:"calico-apiserver-66968c6c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3be170f-16c3-4c87-859b-e3dfd34f1535", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66968c6c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", 
ContainerID:"", Pod:"calico-apiserver-66968c6c76-2jqqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali820272e6bd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.205 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.70/32] ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.205 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali820272e6bd8 ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.215 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.217 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" 
Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0", GenerateName:"calico-apiserver-66968c6c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3be170f-16c3-4c87-859b-e3dfd34f1535", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66968c6c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d", Pod:"calico-apiserver-66968c6c76-2jqqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali820272e6bd8", MAC:"86:93:cf:f9:6f:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:17.262456 containerd[1553]: 2025-09-16 05:04:17.252 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" Namespace="calico-apiserver" Pod="calico-apiserver-66968c6c76-2jqqw" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-calico--apiserver--66968c6c76--2jqqw-eth0" Sep 16 05:04:17.351844 containerd[1553]: time="2025-09-16T05:04:17.351702856Z" level=info msg="connecting to shim ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d" address="unix:///run/containerd/s/868e52e65b398e0ca102ab36b7c4aac5b179c67ab5399b3cf5f669c717786148" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:17.364955 containerd[1553]: time="2025-09-16T05:04:17.364898944Z" level=info msg="connecting to shim 57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce" address="unix:///run/containerd/s/6c792ff38cee30b4c088c54c2bea71f29bd5a19b665ff35027ac2bfda6d614a8" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:17.465612 systemd[1]: Started cri-containerd-ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d.scope - libcontainer container ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d. Sep 16 05:04:17.522034 systemd[1]: Started cri-containerd-57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce.scope - libcontainer container 57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce. 
Sep 16 05:04:17.645900 containerd[1553]: time="2025-09-16T05:04:17.645707769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvcht,Uid:80c0a086-363d-48e6-90a4-f5b72297ad60,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:17.652030 containerd[1553]: time="2025-09-16T05:04:17.651711251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pmjm8,Uid:8cab2cef-f730-4dad-a5fd-db0e5642a330,Namespace:calico-system,Attempt:0,}" Sep 16 05:04:17.898632 containerd[1553]: time="2025-09-16T05:04:17.896687018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rc6fx,Uid:7e88b61e-2da8-431f-b467-19b4511c59b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce\"" Sep 16 05:04:17.943155 containerd[1553]: time="2025-09-16T05:04:17.943084386Z" level=info msg="CreateContainer within sandbox \"57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:04:17.985547 containerd[1553]: time="2025-09-16T05:04:17.985144237Z" level=info msg="Container a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:18.016108 containerd[1553]: time="2025-09-16T05:04:18.016050778Z" level=info msg="CreateContainer within sandbox \"57331dc5e5458ea47a4f3fd24a38d6f619488ba7b8e68e305073904965a12bce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d\"" Sep 16 05:04:18.018860 containerd[1553]: time="2025-09-16T05:04:18.018803835Z" level=info msg="StartContainer for \"a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d\"" Sep 16 05:04:18.020709 containerd[1553]: time="2025-09-16T05:04:18.020640810Z" level=info msg="connecting to shim a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d" 
address="unix:///run/containerd/s/6c792ff38cee30b4c088c54c2bea71f29bd5a19b665ff35027ac2bfda6d614a8" protocol=ttrpc version=3 Sep 16 05:04:18.077236 systemd[1]: Started cri-containerd-a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d.scope - libcontainer container a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d. Sep 16 05:04:18.260621 containerd[1553]: time="2025-09-16T05:04:18.259930367Z" level=info msg="StartContainer for \"a5cd96f749e6a57979552e7554622dcf3974087b70534b4305d6e9015e84489d\" returns successfully" Sep 16 05:04:18.349825 systemd-networkd[1439]: califf508cb1bd4: Link UP Sep 16 05:04:18.352720 systemd-networkd[1439]: califf508cb1bd4: Gained carrier Sep 16 05:04:18.378803 systemd-networkd[1439]: cali1c5249a2952: Gained IPv6LL Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:17.959 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0 coredns-674b8bbfcf- kube-system 80c0a086-363d-48e6-90a4-f5b72297ad60 863 0 2025-09-16 05:03:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f coredns-674b8bbfcf-gvcht eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf508cb1bd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:17.961 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.156 [INFO][4687] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" HandleID="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.156 [INFO][4687] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" HandleID="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000223af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"coredns-674b8bbfcf-gvcht", "timestamp":"2025-09-16 05:04:18.15631977 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.156 [INFO][4687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.156 [INFO][4687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.156 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.195 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.204 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.242 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.251 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.261 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.262 [INFO][4687] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.273 [INFO][4687] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.297 [INFO][4687] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 
handle="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.322 [INFO][4687] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.71/26] block=192.168.55.64/26 handle="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.323 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.71/26] handle="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.323 [INFO][4687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:18.439085 containerd[1553]: 2025-09-16 05:04:18.323 [INFO][4687] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.71/26] IPv6=[] ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" HandleID="k8s-pod-network.a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.329 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0", 
GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80c0a086-363d-48e6-90a4-f5b72297ad60", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"coredns-674b8bbfcf-gvcht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf508cb1bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.335 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.71/32] ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" 
WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.336 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf508cb1bd4 ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.360 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.362 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"80c0a086-363d-48e6-90a4-f5b72297ad60", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f", Pod:"coredns-674b8bbfcf-gvcht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf508cb1bd4", MAC:"42:a0:35:ea:ef:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:18.440430 containerd[1553]: 2025-09-16 05:04:18.433 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvcht" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-coredns--674b8bbfcf--gvcht-eth0" Sep 16 05:04:18.521361 containerd[1553]: time="2025-09-16T05:04:18.519862462Z" level=info msg="connecting to shim a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f" address="unix:///run/containerd/s/5853067658fb7c635035cb8c992a6a4ffb1dd600f7751fc7336ca041cfe19c71" 
namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:18.621748 systemd[1]: Started cri-containerd-a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f.scope - libcontainer container a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f. Sep 16 05:04:18.684301 containerd[1553]: time="2025-09-16T05:04:18.684206629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66968c6c76-2jqqw,Uid:b3be170f-16c3-4c87-859b-e3dfd34f1535,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d\"" Sep 16 05:04:18.740452 systemd-networkd[1439]: cali9bebbddbc32: Link UP Sep 16 05:04:18.742723 systemd-networkd[1439]: cali9bebbddbc32: Gained carrier Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:17.936 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0 goldmane-54d579b49d- calico-system 8cab2cef-f730-4dad-a5fd-db0e5642a330 862 0 2025-09-16 05:03:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f goldmane-54d579b49d-pmjm8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9bebbddbc32 [] [] }} ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:17.937 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.171 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" HandleID="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.172 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" HandleID="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c20b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", "pod":"goldmane-54d579b49d-pmjm8", "timestamp":"2025-09-16 05:04:18.171117654 +0000 UTC"}, Hostname:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.173 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.323 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.324 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f' Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.436 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.492 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.524 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.531 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.543 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.64/26 host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.547 [INFO][4682] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.55.64/26 handle="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.566 [INFO][4682] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.590 [INFO][4682] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.55.64/26 
handle="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.710 [INFO][4682] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.55.72/26] block=192.168.55.64/26 handle="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.710 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.72/26] handle="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" host="ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f" Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.710 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 05:04:18.865756 containerd[1553]: 2025-09-16 05:04:18.710 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.72/26] IPv6=[] ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" HandleID="k8s-pod-network.c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Workload="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.726 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0", 
GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8cab2cef-f730-4dad-a5fd-db0e5642a330", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"", Pod:"goldmane-54d579b49d-pmjm8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bebbddbc32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.728 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.72/32] ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.728 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bebbddbc32 ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" 
WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.745 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.748 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"8cab2cef-f730-4dad-a5fd-db0e5642a330", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 5, 3, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-0-0-nightly-20250915-2100-66e1cc8ae4469428973f", ContainerID:"c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c", Pod:"goldmane-54d579b49d-pmjm8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bebbddbc32", MAC:"62:94:17:fd:ef:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 05:04:18.868902 containerd[1553]: 2025-09-16 05:04:18.861 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" Namespace="calico-system" Pod="goldmane-54d579b49d-pmjm8" WorkloadEndpoint="ci--4459--0--0--nightly--20250915--2100--66e1cc8ae4469428973f-k8s-goldmane--54d579b49d--pmjm8-eth0" Sep 16 05:04:18.888951 systemd-networkd[1439]: cali820272e6bd8: Gained IPv6LL Sep 16 05:04:18.961628 containerd[1553]: time="2025-09-16T05:04:18.961578963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvcht,Uid:80c0a086-363d-48e6-90a4-f5b72297ad60,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f\"" Sep 16 05:04:18.988681 containerd[1553]: time="2025-09-16T05:04:18.984544730Z" level=info msg="CreateContainer within sandbox \"a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:04:19.000780 containerd[1553]: time="2025-09-16T05:04:19.000717085Z" level=info msg="connecting to shim c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c" address="unix:///run/containerd/s/3b3c51fb0a6f723d04a0f5ac5090a348e746169d066e8e1851c4c74ed75dacab" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:19.024961 containerd[1553]: 
time="2025-09-16T05:04:19.024902981Z" level=info msg="Container 7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:19.050636 containerd[1553]: time="2025-09-16T05:04:19.049719955Z" level=info msg="CreateContainer within sandbox \"a3add1470d01bb2f495648efcbfb79bfed438948155861c7a70a568cfa96409f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23\"" Sep 16 05:04:19.052687 containerd[1553]: time="2025-09-16T05:04:19.052643049Z" level=info msg="StartContainer for \"7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23\"" Sep 16 05:04:19.059905 containerd[1553]: time="2025-09-16T05:04:19.059860521Z" level=info msg="connecting to shim 7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23" address="unix:///run/containerd/s/5853067658fb7c635035cb8c992a6a4ffb1dd600f7751fc7336ca041cfe19c71" protocol=ttrpc version=3 Sep 16 05:04:19.157630 systemd[1]: Started cri-containerd-c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c.scope - libcontainer container c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c. Sep 16 05:04:19.205041 systemd[1]: Started cri-containerd-7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23.scope - libcontainer container 7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23. 
Sep 16 05:04:19.252637 kubelet[2798]: I0916 05:04:19.251860 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rc6fx" podStartSLOduration=44.251831746 podStartE2EDuration="44.251831746s" podCreationTimestamp="2025-09-16 05:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:19.090505614 +0000 UTC m=+50.687141819" watchObservedRunningTime="2025-09-16 05:04:19.251831746 +0000 UTC m=+50.848467962" Sep 16 05:04:19.361265 containerd[1553]: time="2025-09-16T05:04:19.361219067Z" level=info msg="StartContainer for \"7ee546c9223a6b97d3ef62172bfb3823e28b8c4377de7a36b5fa0cf589af3a23\" returns successfully" Sep 16 05:04:19.784789 systemd-networkd[1439]: califf508cb1bd4: Gained IPv6LL Sep 16 05:04:19.994153 containerd[1553]: time="2025-09-16T05:04:19.994079226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pmjm8,Uid:8cab2cef-f730-4dad-a5fd-db0e5642a330,Namespace:calico-system,Attempt:0,} returns sandbox id \"c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c\"" Sep 16 05:04:20.360855 systemd-networkd[1439]: cali9bebbddbc32: Gained IPv6LL Sep 16 05:04:20.859973 containerd[1553]: time="2025-09-16T05:04:20.859525200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:20.862591 containerd[1553]: time="2025-09-16T05:04:20.862507306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 05:04:20.871507 containerd[1553]: time="2025-09-16T05:04:20.871100292Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:20.876033 containerd[1553]: 
time="2025-09-16T05:04:20.875988910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:20.877510 containerd[1553]: time="2025-09-16T05:04:20.877454374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.185661663s" Sep 16 05:04:20.877510 containerd[1553]: time="2025-09-16T05:04:20.877504628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 05:04:20.884059 containerd[1553]: time="2025-09-16T05:04:20.884028888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 05:04:20.887579 containerd[1553]: time="2025-09-16T05:04:20.887282753Z" level=info msg="CreateContainer within sandbox \"3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 05:04:20.899751 containerd[1553]: time="2025-09-16T05:04:20.899711525Z" level=info msg="Container 4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:20.918357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74780312.mount: Deactivated successfully. 
Sep 16 05:04:20.926244 containerd[1553]: time="2025-09-16T05:04:20.926161891Z" level=info msg="CreateContainer within sandbox \"3bf550936d38442284070e560b54446abfe3fa5e4cccdc419440502ea619caa0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44\"" Sep 16 05:04:20.927914 containerd[1553]: time="2025-09-16T05:04:20.927871508Z" level=info msg="StartContainer for \"4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44\"" Sep 16 05:04:20.931045 containerd[1553]: time="2025-09-16T05:04:20.930973120Z" level=info msg="connecting to shim 4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44" address="unix:///run/containerd/s/75001916bbe3e481528d2de14b403ac59b6f157a77e479efc573883e2b639113" protocol=ttrpc version=3 Sep 16 05:04:20.996118 systemd[1]: Started cri-containerd-4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44.scope - libcontainer container 4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44. 
Sep 16 05:04:21.052229 kubelet[2798]: I0916 05:04:21.051951 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gvcht" podStartSLOduration=46.051923656 podStartE2EDuration="46.051923656s" podCreationTimestamp="2025-09-16 05:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:20.045853416 +0000 UTC m=+51.642489686" watchObservedRunningTime="2025-09-16 05:04:21.051923656 +0000 UTC m=+52.648559853" Sep 16 05:04:21.174310 containerd[1553]: time="2025-09-16T05:04:21.173067900Z" level=info msg="StartContainer for \"4d3af8a0bc31ae021e17b2d24d08fc3d995b42cba824af4d8f72df5d05c1fa44\" returns successfully" Sep 16 05:04:22.072660 kubelet[2798]: I0916 05:04:22.072576 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66968c6c76-68bn8" podStartSLOduration=28.520339168 podStartE2EDuration="36.072507807s" podCreationTimestamp="2025-09-16 05:03:46 +0000 UTC" firstStartedPulling="2025-09-16 05:04:13.329429292 +0000 UTC m=+44.926065484" lastFinishedPulling="2025-09-16 05:04:20.881597929 +0000 UTC m=+52.478234123" observedRunningTime="2025-09-16 05:04:22.070027676 +0000 UTC m=+53.666663877" watchObservedRunningTime="2025-09-16 05:04:22.072507807 +0000 UTC m=+53.669144007" Sep 16 05:04:22.368379 ntpd[1623]: Listen normally on 6 vxlan.calico 192.168.55.64:123 Sep 16 05:04:22.368467 ntpd[1623]: Listen normally on 7 cali8911d2a4bea [fe80::ecee:eeff:feee:eeee%4]:123 Sep 16 05:04:22.368511 ntpd[1623]: Listen normally on 8 vxlan.calico [fe80::642b:5cff:fe15:5a82%5]:123 Sep 16 05:04:22.370627 ntpd[1623]: Listen normally on 9 calib61dce90d67 [fe80::ecee:eeff:feee:eeee%8]:123
Sep 16 05:04:22.370694 ntpd[1623]: Listen normally on 10 cali4a8b86f13cb [fe80::ecee:eeff:feee:eeee%9]:123 Sep 16 05:04:22.370735 ntpd[1623]: Listen normally on 11 calic513090aa6a [fe80::ecee:eeff:feee:eeee%10]:123 Sep 16 05:04:22.370785 ntpd[1623]: Listen normally on 12 cali1c5249a2952 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 16 05:04:22.370824 ntpd[1623]: Listen normally on 13 cali820272e6bd8 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 16 05:04:22.370864 ntpd[1623]: Listen normally on 14 califf508cb1bd4 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 16 05:04:22.370901 ntpd[1623]: Listen normally on 15 cali9bebbddbc32 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 16 05:04:23.045492 kubelet[2798]: I0916 05:04:23.045148 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:24.747096 containerd[1553]: time="2025-09-16T05:04:24.747034265Z"
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:24.749907 containerd[1553]: time="2025-09-16T05:04:24.749853807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 16 05:04:24.751038 containerd[1553]: time="2025-09-16T05:04:24.750966331Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:24.757630 containerd[1553]: time="2025-09-16T05:04:24.756618947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:24.760738 containerd[1553]: time="2025-09-16T05:04:24.760277844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.874850957s" Sep 16 05:04:24.760738 containerd[1553]: time="2025-09-16T05:04:24.760324036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 16 05:04:24.762872 containerd[1553]: time="2025-09-16T05:04:24.762605203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 05:04:24.801278 containerd[1553]: time="2025-09-16T05:04:24.800920895Z" level=info msg="CreateContainer within sandbox \"c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 05:04:24.830610 containerd[1553]: time="2025-09-16T05:04:24.826882773Z" level=info msg="Container 79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:24.844647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083120486.mount: Deactivated successfully. Sep 16 05:04:24.856846 containerd[1553]: time="2025-09-16T05:04:24.856653570Z" level=info msg="CreateContainer within sandbox \"c387e2de1da419e7a0d88ec2e7ca1023b377f0c99a1fd37b58099f6b0f3a2988\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\"" Sep 16 05:04:24.860539 containerd[1553]: time="2025-09-16T05:04:24.858740780Z" level=info msg="StartContainer for \"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\"" Sep 16 05:04:24.860539 containerd[1553]: time="2025-09-16T05:04:24.860450541Z" level=info msg="connecting to shim 79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab" address="unix:///run/containerd/s/c8492bd57ec160462b97e3e07ef4d24f9219605987d467e333aa2566746b75fe" protocol=ttrpc version=3 Sep 16 05:04:24.914670 systemd[1]: Started cri-containerd-79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab.scope - libcontainer container 79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab. 
Sep 16 05:04:25.271887 containerd[1553]: time="2025-09-16T05:04:25.271828129Z" level=info msg="StartContainer for \"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\" returns successfully" Sep 16 05:04:25.982749 containerd[1553]: time="2025-09-16T05:04:25.982681241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:25.985522 containerd[1553]: time="2025-09-16T05:04:25.985413478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 16 05:04:25.986066 containerd[1553]: time="2025-09-16T05:04:25.986006065Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:25.990746 containerd[1553]: time="2025-09-16T05:04:25.990651288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:25.992343 containerd[1553]: time="2025-09-16T05:04:25.992264485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.229619302s" Sep 16 05:04:25.993128 containerd[1553]: time="2025-09-16T05:04:25.992870370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 16 05:04:25.995434 containerd[1553]: time="2025-09-16T05:04:25.995117546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 
05:04:26.001026 containerd[1553]: time="2025-09-16T05:04:26.000995362Z" level=info msg="CreateContainer within sandbox \"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 05:04:26.014804 containerd[1553]: time="2025-09-16T05:04:26.014759130Z" level=info msg="Container be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:26.033682 containerd[1553]: time="2025-09-16T05:04:26.033535700Z" level=info msg="CreateContainer within sandbox \"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74\"" Sep 16 05:04:26.035799 containerd[1553]: time="2025-09-16T05:04:26.035715478Z" level=info msg="StartContainer for \"be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74\"" Sep 16 05:04:26.040050 containerd[1553]: time="2025-09-16T05:04:26.039952457Z" level=info msg="connecting to shim be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74" address="unix:///run/containerd/s/3636e11f6620878619d727073e1cab75c35cb401fcea3e47a8159abc930b8aab" protocol=ttrpc version=3 Sep 16 05:04:26.089236 systemd[1]: Started cri-containerd-be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74.scope - libcontainer container be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74. 
Sep 16 05:04:26.213661 kubelet[2798]: I0916 05:04:26.213456 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fc65d5cbc-w5nr9" podStartSLOduration=24.430384716 podStartE2EDuration="35.213409714s" podCreationTimestamp="2025-09-16 05:03:51 +0000 UTC" firstStartedPulling="2025-09-16 05:04:13.978902186 +0000 UTC m=+45.575538387" lastFinishedPulling="2025-09-16 05:04:24.761927195 +0000 UTC m=+56.358563385" observedRunningTime="2025-09-16 05:04:26.19214256 +0000 UTC m=+57.788778761" watchObservedRunningTime="2025-09-16 05:04:26.213409714 +0000 UTC m=+57.810045914" Sep 16 05:04:26.215603 containerd[1553]: time="2025-09-16T05:04:26.215254436Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:26.231794 containerd[1553]: time="2025-09-16T05:04:26.231743528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 05:04:26.242359 containerd[1553]: time="2025-09-16T05:04:26.242228187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 247.036168ms" Sep 16 05:04:26.243409 containerd[1553]: time="2025-09-16T05:04:26.243237718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 05:04:26.247162 containerd[1553]: time="2025-09-16T05:04:26.246860147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 05:04:26.253870 containerd[1553]: time="2025-09-16T05:04:26.252766623Z" level=info 
msg="CreateContainer within sandbox \"ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 05:04:26.264545 containerd[1553]: time="2025-09-16T05:04:26.264484263Z" level=info msg="Container 71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:26.286000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925735878.mount: Deactivated successfully. Sep 16 05:04:26.287589 containerd[1553]: time="2025-09-16T05:04:26.287513901Z" level=info msg="CreateContainer within sandbox \"ff64dd0bd7e1f07f48a0217352a76cd1450ec3667ac2e491865d994c054f026d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325\"" Sep 16 05:04:26.289056 containerd[1553]: time="2025-09-16T05:04:26.288786638Z" level=info msg="StartContainer for \"71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325\"" Sep 16 05:04:26.292144 containerd[1553]: time="2025-09-16T05:04:26.292090323Z" level=info msg="connecting to shim 71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325" address="unix:///run/containerd/s/868e52e65b398e0ca102ab36b7c4aac5b179c67ab5399b3cf5f669c717786148" protocol=ttrpc version=3 Sep 16 05:04:26.337061 systemd[1]: Started cri-containerd-71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325.scope - libcontainer container 71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325. 
Sep 16 05:04:26.429806 containerd[1553]: time="2025-09-16T05:04:26.429713069Z" level=info msg="StartContainer for \"be94bf3dbbdf956a81d12f70773ba999481644ebca5b2b3c68245b221e0ece74\" returns successfully" Sep 16 05:04:26.508166 containerd[1553]: time="2025-09-16T05:04:26.507170862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\" id:\"316a89f0c3556d4fed453fed465d3a944b3887f00b05409dd54297a42d86ba33\" pid:5025 exited_at:{seconds:1757999066 nanos:504451816}" Sep 16 05:04:26.654292 containerd[1553]: time="2025-09-16T05:04:26.654242585Z" level=info msg="StartContainer for \"71a7a4214c26ae753d9bb05d16b5f778694a34c91b2261c2476f19c22a8b4325\" returns successfully" Sep 16 05:04:27.163721 kubelet[2798]: I0916 05:04:27.163603 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66968c6c76-2jqqw" podStartSLOduration=33.605277603 podStartE2EDuration="41.162546266s" podCreationTimestamp="2025-09-16 05:03:46 +0000 UTC" firstStartedPulling="2025-09-16 05:04:18.688217544 +0000 UTC m=+50.284853720" lastFinishedPulling="2025-09-16 05:04:26.245486191 +0000 UTC m=+57.842122383" observedRunningTime="2025-09-16 05:04:27.161004751 +0000 UTC m=+58.757640952" watchObservedRunningTime="2025-09-16 05:04:27.162546266 +0000 UTC m=+58.759182466" Sep 16 05:04:28.133582 kubelet[2798]: I0916 05:04:28.133534 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:29.480849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839808072.mount: Deactivated successfully. 
Sep 16 05:04:30.662864 kubelet[2798]: I0916 05:04:30.662818 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:30.980803 containerd[1553]: time="2025-09-16T05:04:30.980250417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:30.984753 containerd[1553]: time="2025-09-16T05:04:30.984702326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 16 05:04:30.986196 containerd[1553]: time="2025-09-16T05:04:30.985806182Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:30.995350 containerd[1553]: time="2025-09-16T05:04:30.995303978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:30.999546 containerd[1553]: time="2025-09-16T05:04:30.999498945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.752597455s" Sep 16 05:04:31.000310 containerd[1553]: time="2025-09-16T05:04:30.999549681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 16 05:04:31.005489 containerd[1553]: time="2025-09-16T05:04:31.005434777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 05:04:31.011351 
containerd[1553]: time="2025-09-16T05:04:31.011307546Z" level=info msg="CreateContainer within sandbox \"c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 05:04:31.027596 containerd[1553]: time="2025-09-16T05:04:31.026695681Z" level=info msg="Container 2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:31.051505 containerd[1553]: time="2025-09-16T05:04:31.050403810Z" level=info msg="CreateContainer within sandbox \"c36053aa5038a80464718544ce4c4845140541ae3013290eac7d3db88687501c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\"" Sep 16 05:04:31.052327 containerd[1553]: time="2025-09-16T05:04:31.052290043Z" level=info msg="StartContainer for \"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\"" Sep 16 05:04:31.056197 containerd[1553]: time="2025-09-16T05:04:31.056109215Z" level=info msg="connecting to shim 2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168" address="unix:///run/containerd/s/3b3c51fb0a6f723d04a0f5ac5090a348e746169d066e8e1851c4c74ed75dacab" protocol=ttrpc version=3 Sep 16 05:04:31.123804 systemd[1]: Started cri-containerd-2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168.scope - libcontainer container 2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168. 
Sep 16 05:04:31.448395 containerd[1553]: time="2025-09-16T05:04:31.448345367Z" level=info msg="StartContainer for \"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" returns successfully" Sep 16 05:04:32.759243 containerd[1553]: time="2025-09-16T05:04:32.757755311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.761098 containerd[1553]: time="2025-09-16T05:04:32.761022124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 16 05:04:32.762583 containerd[1553]: time="2025-09-16T05:04:32.762351035Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.767223 containerd[1553]: time="2025-09-16T05:04:32.767185548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:32.769865 containerd[1553]: time="2025-09-16T05:04:32.769791368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.764287034s" Sep 16 05:04:32.770334 containerd[1553]: time="2025-09-16T05:04:32.770005377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 16 05:04:32.776010 containerd[1553]: 
time="2025-09-16T05:04:32.775963138Z" level=info msg="CreateContainer within sandbox \"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 05:04:32.790582 containerd[1553]: time="2025-09-16T05:04:32.787684888Z" level=info msg="Container 9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:32.808187 containerd[1553]: time="2025-09-16T05:04:32.808131133Z" level=info msg="CreateContainer within sandbox \"74285f1c3c86d84879ce4231b9998ccfb134c797f66cc8efb7f9af0bc17635d5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f\"" Sep 16 05:04:32.811208 containerd[1553]: time="2025-09-16T05:04:32.811130221Z" level=info msg="StartContainer for \"9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f\"" Sep 16 05:04:32.815907 containerd[1553]: time="2025-09-16T05:04:32.815863983Z" level=info msg="connecting to shim 9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f" address="unix:///run/containerd/s/3636e11f6620878619d727073e1cab75c35cb401fcea3e47a8159abc930b8aab" protocol=ttrpc version=3 Sep 16 05:04:32.868811 systemd[1]: Started cri-containerd-9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f.scope - libcontainer container 9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f. 
Sep 16 05:04:33.067231 containerd[1553]: time="2025-09-16T05:04:33.067181493Z" level=info msg="StartContainer for \"9b6adc9e3a8cf920123cbc58f0ff9a863025db6fb2de9f5e4be5bca919437f0f\" returns successfully" Sep 16 05:04:33.092123 containerd[1553]: time="2025-09-16T05:04:33.091949695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"ddbbfcaecc34857c5f59fdd43fea9599453cb75e928d69819f44e0e0011c5278\" pid:5149 exit_status:1 exited_at:{seconds:1757999073 nanos:89419394}" Sep 16 05:04:33.211363 kubelet[2798]: I0916 05:04:33.211284 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xxzmm" podStartSLOduration=25.645668532 podStartE2EDuration="43.211259313s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="2025-09-16 05:04:15.206410251 +0000 UTC m=+46.803046437" lastFinishedPulling="2025-09-16 05:04:32.772001038 +0000 UTC m=+64.368637218" observedRunningTime="2025-09-16 05:04:33.208472151 +0000 UTC m=+64.805108354" watchObservedRunningTime="2025-09-16 05:04:33.211259313 +0000 UTC m=+64.807895513" Sep 16 05:04:33.212020 kubelet[2798]: I0916 05:04:33.211877 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-pmjm8" podStartSLOduration=32.207837829 podStartE2EDuration="43.211833816s" podCreationTimestamp="2025-09-16 05:03:50 +0000 UTC" firstStartedPulling="2025-09-16 05:04:19.99876726 +0000 UTC m=+51.595403457" lastFinishedPulling="2025-09-16 05:04:31.002763268 +0000 UTC m=+62.599399444" observedRunningTime="2025-09-16 05:04:32.201475897 +0000 UTC m=+63.798112097" watchObservedRunningTime="2025-09-16 05:04:33.211833816 +0000 UTC m=+64.808470015" Sep 16 05:04:33.567189 containerd[1553]: time="2025-09-16T05:04:33.567130697Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"8d5c45f626519e0e0e48cd9c3a688f3f6bc8e424f48c5cc05145b3ef5b8bb689\" pid:5208 exit_status:1 exited_at:{seconds:1757999073 nanos:566705721}" Sep 16 05:04:33.789550 kubelet[2798]: I0916 05:04:33.789511 2798 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 05:04:33.789752 kubelet[2798]: I0916 05:04:33.789594 2798 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 05:04:34.149328 kubelet[2798]: I0916 05:04:34.148998 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 05:04:34.676917 containerd[1553]: time="2025-09-16T05:04:34.676533689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"d89780d904f2b7b9642f0ecbc2517c1bd7851d6386504098d3244a37a3ac4eb2\" pid:5237 exit_status:1 exited_at:{seconds:1757999074 nanos:676083648}" Sep 16 05:04:36.778939 systemd[1]: Started sshd@9-10.128.0.94:22-139.178.68.195:56064.service - OpenSSH per-connection server daemon (139.178.68.195:56064). Sep 16 05:04:37.101120 sshd[5256]: Accepted publickey for core from 139.178.68.195 port 56064 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:37.106016 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:37.121326 systemd-logind[1537]: New session 10 of user core. Sep 16 05:04:37.128261 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 05:04:37.518199 sshd[5259]: Connection closed by 139.178.68.195 port 56064
Sep 16 05:04:37.519131 sshd-session[5256]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:37.533671 systemd[1]: sshd@9-10.128.0.94:22-139.178.68.195:56064.service: Deactivated successfully.
Sep 16 05:04:37.541729 systemd[1]: session-10.scope: Deactivated successfully.
Sep 16 05:04:37.544837 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit.
Sep 16 05:04:37.552324 systemd-logind[1537]: Removed session 10.
Sep 16 05:04:42.581961 systemd[1]: Started sshd@10-10.128.0.94:22-139.178.68.195:60780.service - OpenSSH per-connection server daemon (139.178.68.195:60780).
Sep 16 05:04:42.926046 sshd[5276]: Accepted publickey for core from 139.178.68.195 port 60780 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:04:42.930730 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:04:42.942909 systemd-logind[1537]: New session 11 of user core.
Sep 16 05:04:42.950761 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 16 05:04:43.356534 sshd[5279]: Connection closed by 139.178.68.195 port 60780
Sep 16 05:04:43.357890 sshd-session[5276]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:43.369205 systemd[1]: sshd@10-10.128.0.94:22-139.178.68.195:60780.service: Deactivated successfully.
Sep 16 05:04:43.375979 systemd[1]: session-11.scope: Deactivated successfully.
Sep 16 05:04:43.379645 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit.
Sep 16 05:04:43.383508 systemd-logind[1537]: Removed session 11.
Sep 16 05:04:45.650290 containerd[1553]: time="2025-09-16T05:04:45.650091840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\" id:\"7046981110fab1326d1931307d27e33ea133f1dc9e72049b70ddc685112b5ce3\" pid:5305 exited_at:{seconds:1757999085 nanos:649094032}"
Sep 16 05:04:48.414956 systemd[1]: Started sshd@11-10.128.0.94:22-139.178.68.195:60796.service - OpenSSH per-connection server daemon (139.178.68.195:60796).
Sep 16 05:04:48.747198 sshd[5318]: Accepted publickey for core from 139.178.68.195 port 60796 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:04:48.749481 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:04:48.772585 systemd-logind[1537]: New session 12 of user core.
Sep 16 05:04:48.774139 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 16 05:04:49.101570 sshd[5321]: Connection closed by 139.178.68.195 port 60796
Sep 16 05:04:49.102734 sshd-session[5318]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:49.111253 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit.
Sep 16 05:04:49.112337 systemd[1]: sshd@11-10.128.0.94:22-139.178.68.195:60796.service: Deactivated successfully.
Sep 16 05:04:49.118224 systemd[1]: session-12.scope: Deactivated successfully.
Sep 16 05:04:49.123665 systemd-logind[1537]: Removed session 12.
Sep 16 05:04:49.160988 systemd[1]: Started sshd@12-10.128.0.94:22-139.178.68.195:60800.service - OpenSSH per-connection server daemon (139.178.68.195:60800).
Sep 16 05:04:49.488782 sshd[5334]: Accepted publickey for core from 139.178.68.195 port 60800 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:04:49.490835 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:04:49.504638 systemd-logind[1537]: New session 13 of user core.
Sep 16 05:04:49.509786 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 16 05:04:49.957592 sshd[5337]: Connection closed by 139.178.68.195 port 60800
Sep 16 05:04:49.958856 sshd-session[5334]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:49.968620 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit.
Sep 16 05:04:49.970208 systemd[1]: sshd@12-10.128.0.94:22-139.178.68.195:60800.service: Deactivated successfully.
Sep 16 05:04:49.976877 systemd[1]: session-13.scope: Deactivated successfully.
Sep 16 05:04:49.983806 systemd-logind[1537]: Removed session 13.
Sep 16 05:04:50.014413 systemd[1]: Started sshd@13-10.128.0.94:22-139.178.68.195:48490.service - OpenSSH per-connection server daemon (139.178.68.195:48490).
Sep 16 05:04:50.352848 sshd[5347]: Accepted publickey for core from 139.178.68.195 port 48490 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:04:50.354281 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:04:50.365503 systemd-logind[1537]: New session 14 of user core.
Sep 16 05:04:50.370541 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 16 05:04:50.717253 sshd[5350]: Connection closed by 139.178.68.195 port 48490
Sep 16 05:04:50.718143 sshd-session[5347]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:50.732124 systemd[1]: sshd@13-10.128.0.94:22-139.178.68.195:48490.service: Deactivated successfully.
Sep 16 05:04:50.737176 systemd[1]: session-14.scope: Deactivated successfully.
Sep 16 05:04:50.739638 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit.
Sep 16 05:04:50.743086 systemd-logind[1537]: Removed session 14.
Sep 16 05:04:55.778881 systemd[1]: Started sshd@14-10.128.0.94:22-139.178.68.195:48494.service - OpenSSH per-connection server daemon (139.178.68.195:48494).
Sep 16 05:04:56.128551 sshd[5369]: Accepted publickey for core from 139.178.68.195 port 48494 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:04:56.130941 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:04:56.146445 systemd-logind[1537]: New session 15 of user core.
Sep 16 05:04:56.152201 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 16 05:04:56.224389 containerd[1553]: time="2025-09-16T05:04:56.223865738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\" id:\"8d6d8210469c962d0374fd209a63d98b29f1a0780b31c3348449ac63d9c0cdbf\" pid:5387 exited_at:{seconds:1757999096 nanos:223235459}"
Sep 16 05:04:56.501698 sshd[5388]: Connection closed by 139.178.68.195 port 48494
Sep 16 05:04:56.502769 sshd-session[5369]: pam_unix(sshd:session): session closed for user core
Sep 16 05:04:56.511778 systemd[1]: sshd@14-10.128.0.94:22-139.178.68.195:48494.service: Deactivated successfully.
Sep 16 05:04:56.518725 systemd[1]: session-15.scope: Deactivated successfully.
Sep 16 05:04:56.521070 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit.
Sep 16 05:04:56.525094 systemd-logind[1537]: Removed session 15.
Sep 16 05:05:01.561743 systemd[1]: Started sshd@15-10.128.0.94:22-139.178.68.195:50358.service - OpenSSH per-connection server daemon (139.178.68.195:50358).
Sep 16 05:05:01.882172 sshd[5411]: Accepted publickey for core from 139.178.68.195 port 50358 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:01.885211 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:01.894944 systemd-logind[1537]: New session 16 of user core.
Sep 16 05:05:01.903036 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 16 05:05:02.275676 sshd[5414]: Connection closed by 139.178.68.195 port 50358
Sep 16 05:05:02.276150 sshd-session[5411]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:02.288767 systemd[1]: sshd@15-10.128.0.94:22-139.178.68.195:50358.service: Deactivated successfully.
Sep 16 05:05:02.294648 systemd[1]: session-16.scope: Deactivated successfully.
Sep 16 05:05:02.300244 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit.
Sep 16 05:05:02.304103 systemd-logind[1537]: Removed session 16.
Sep 16 05:05:04.570365 containerd[1553]: time="2025-09-16T05:05:04.570282031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"698d72b2d9a3a5a5a167b0223bc9c5ba9d2f12567b65ce78c911906f99a33c68\" pid:5440 exited_at:{seconds:1757999104 nanos:568372644}"
Sep 16 05:05:07.338119 systemd[1]: Started sshd@16-10.128.0.94:22-139.178.68.195:50364.service - OpenSSH per-connection server daemon (139.178.68.195:50364).
Sep 16 05:05:07.677680 sshd[5455]: Accepted publickey for core from 139.178.68.195 port 50364 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:07.680800 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:07.689273 systemd-logind[1537]: New session 17 of user core.
Sep 16 05:05:07.696843 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 16 05:05:08.079861 sshd[5458]: Connection closed by 139.178.68.195 port 50364
Sep 16 05:05:08.081652 sshd-session[5455]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:08.094326 systemd[1]: sshd@16-10.128.0.94:22-139.178.68.195:50364.service: Deactivated successfully.
Sep 16 05:05:08.095014 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit.
Sep 16 05:05:08.103533 systemd[1]: session-17.scope: Deactivated successfully.
Sep 16 05:05:08.110392 systemd-logind[1537]: Removed session 17.
Sep 16 05:05:08.690579 containerd[1553]: time="2025-09-16T05:05:08.690465838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\" id:\"0d3f017f8199129156fd8434b4d3643b9513738659f14f6e95cead26cc83092c\" pid:5483 exited_at:{seconds:1757999108 nanos:690142418}"
Sep 16 05:05:11.214571 containerd[1553]: time="2025-09-16T05:05:11.214453968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"3b3b39058763a76d30958a5ce5c5fd1c22884e0237d84710f26a93c6e4bf807d\" pid:5503 exited_at:{seconds:1757999111 nanos:213754152}"
Sep 16 05:05:13.138693 systemd[1]: Started sshd@17-10.128.0.94:22-139.178.68.195:56714.service - OpenSSH per-connection server daemon (139.178.68.195:56714).
Sep 16 05:05:13.460314 sshd[5515]: Accepted publickey for core from 139.178.68.195 port 56714 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:13.461212 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:13.476962 systemd-logind[1537]: New session 18 of user core.
Sep 16 05:05:13.483793 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 16 05:05:13.808583 sshd[5518]: Connection closed by 139.178.68.195 port 56714
Sep 16 05:05:13.809845 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:13.819773 systemd[1]: sshd@17-10.128.0.94:22-139.178.68.195:56714.service: Deactivated successfully.
Sep 16 05:05:13.820124 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit.
Sep 16 05:05:13.825773 systemd[1]: session-18.scope: Deactivated successfully.
Sep 16 05:05:13.831821 systemd-logind[1537]: Removed session 18.
Sep 16 05:05:13.867682 systemd[1]: Started sshd@18-10.128.0.94:22-139.178.68.195:56726.service - OpenSSH per-connection server daemon (139.178.68.195:56726).
Sep 16 05:05:14.189661 sshd[5530]: Accepted publickey for core from 139.178.68.195 port 56726 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:14.192329 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:14.202631 systemd-logind[1537]: New session 19 of user core.
Sep 16 05:05:14.207988 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 16 05:05:14.640000 sshd[5533]: Connection closed by 139.178.68.195 port 56726
Sep 16 05:05:14.643101 sshd-session[5530]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:14.664272 systemd[1]: sshd@18-10.128.0.94:22-139.178.68.195:56726.service: Deactivated successfully.
Sep 16 05:05:14.672781 systemd[1]: session-19.scope: Deactivated successfully.
Sep 16 05:05:14.679814 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit.
Sep 16 05:05:14.700765 systemd[1]: Started sshd@19-10.128.0.94:22-139.178.68.195:56734.service - OpenSSH per-connection server daemon (139.178.68.195:56734).
Sep 16 05:05:14.705140 systemd-logind[1537]: Removed session 19.
Sep 16 05:05:15.034805 sshd[5543]: Accepted publickey for core from 139.178.68.195 port 56734 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:15.037215 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:15.049451 systemd-logind[1537]: New session 20 of user core.
Sep 16 05:05:15.061635 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 16 05:05:16.062585 containerd[1553]: time="2025-09-16T05:05:16.060926410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c774f9dee76e0ebd649aea175c76b27f0271552acda54b2c2b08895a7e59aac0\" id:\"54053f92f60413a8237095856c5db7058b4e82a7f56c07a7cda8aad0bed6721c\" pid:5565 exited_at:{seconds:1757999116 nanos:60422154}"
Sep 16 05:05:16.312294 sshd[5546]: Connection closed by 139.178.68.195 port 56734
Sep 16 05:05:16.316821 sshd-session[5543]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:16.325321 systemd[1]: sshd@19-10.128.0.94:22-139.178.68.195:56734.service: Deactivated successfully.
Sep 16 05:05:16.331929 systemd[1]: session-20.scope: Deactivated successfully.
Sep 16 05:05:16.336228 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit.
Sep 16 05:05:16.343799 systemd-logind[1537]: Removed session 20.
Sep 16 05:05:16.380319 systemd[1]: Started sshd@20-10.128.0.94:22-139.178.68.195:56744.service - OpenSSH per-connection server daemon (139.178.68.195:56744).
Sep 16 05:05:16.716348 sshd[5585]: Accepted publickey for core from 139.178.68.195 port 56744 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:16.718526 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:16.734164 systemd-logind[1537]: New session 21 of user core.
Sep 16 05:05:16.738710 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 16 05:05:17.309580 sshd[5590]: Connection closed by 139.178.68.195 port 56744
Sep 16 05:05:17.311817 sshd-session[5585]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:17.322964 systemd[1]: sshd@20-10.128.0.94:22-139.178.68.195:56744.service: Deactivated successfully.
Sep 16 05:05:17.329739 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 05:05:17.335478 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit.
Sep 16 05:05:17.339461 systemd-logind[1537]: Removed session 21.
Sep 16 05:05:17.369000 systemd[1]: Started sshd@21-10.128.0.94:22-139.178.68.195:56754.service - OpenSSH per-connection server daemon (139.178.68.195:56754).
Sep 16 05:05:17.720961 sshd[5600]: Accepted publickey for core from 139.178.68.195 port 56754 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:17.724514 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:17.737533 systemd-logind[1537]: New session 22 of user core.
Sep 16 05:05:17.744020 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 05:05:18.112546 sshd[5604]: Connection closed by 139.178.68.195 port 56754
Sep 16 05:05:18.115855 sshd-session[5600]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:18.127345 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit.
Sep 16 05:05:18.129651 systemd[1]: sshd@21-10.128.0.94:22-139.178.68.195:56754.service: Deactivated successfully.
Sep 16 05:05:18.135475 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 05:05:18.141983 systemd-logind[1537]: Removed session 22.
Sep 16 05:05:23.174507 systemd[1]: Started sshd@22-10.128.0.94:22-139.178.68.195:43834.service - OpenSSH per-connection server daemon (139.178.68.195:43834).
Sep 16 05:05:23.504414 sshd[5618]: Accepted publickey for core from 139.178.68.195 port 43834 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:23.505829 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:23.516763 systemd-logind[1537]: New session 23 of user core.
Sep 16 05:05:23.525345 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 16 05:05:23.848093 sshd[5621]: Connection closed by 139.178.68.195 port 43834
Sep 16 05:05:23.850604 sshd-session[5618]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:23.860930 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit.
Sep 16 05:05:23.864097 systemd[1]: sshd@22-10.128.0.94:22-139.178.68.195:43834.service: Deactivated successfully.
Sep 16 05:05:23.869903 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 05:05:23.874719 systemd-logind[1537]: Removed session 23.
Sep 16 05:05:26.188231 containerd[1553]: time="2025-09-16T05:05:26.188076155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79a73db95b8a77386abd30851f6d9e0e1495c07416ddabaa4bbd207409fb03ab\" id:\"5691c3507980712de6e957a5ebcf84717523a860b900bad373084c6390647c75\" pid:5644 exited_at:{seconds:1757999126 nanos:187615935}"
Sep 16 05:05:28.907699 systemd[1]: Started sshd@23-10.128.0.94:22-139.178.68.195:43846.service - OpenSSH per-connection server daemon (139.178.68.195:43846).
Sep 16 05:05:29.233116 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 43846 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:29.236637 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:29.246806 systemd-logind[1537]: New session 24 of user core.
Sep 16 05:05:29.256651 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 05:05:29.585653 sshd[5660]: Connection closed by 139.178.68.195 port 43846
Sep 16 05:05:29.587843 sshd-session[5657]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:29.596869 systemd[1]: sshd@23-10.128.0.94:22-139.178.68.195:43846.service: Deactivated successfully.
Sep 16 05:05:29.603036 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 05:05:29.605626 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit.
Sep 16 05:05:29.608806 systemd-logind[1537]: Removed session 24.
Sep 16 05:05:34.310727 containerd[1553]: time="2025-09-16T05:05:34.310658928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639d81c90ee8974b2557a12a09668a7baecf2c3ff7e21bd8ba5cf6d649f4168\" id:\"6ea24fe8121064e3204374e8ff9ad415d2ea75ed6fb9c93264fdb7ec6d2f5e82\" pid:5690 exited_at:{seconds:1757999134 nanos:310164789}"
Sep 16 05:05:34.645370 systemd[1]: Started sshd@24-10.128.0.94:22-139.178.68.195:39500.service - OpenSSH per-connection server daemon (139.178.68.195:39500).
Sep 16 05:05:34.988304 sshd[5702]: Accepted publickey for core from 139.178.68.195 port 39500 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI
Sep 16 05:05:34.991421 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:05:34.999626 systemd-logind[1537]: New session 25 of user core.
Sep 16 05:05:35.007874 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 05:05:35.326806 sshd[5705]: Connection closed by 139.178.68.195 port 39500
Sep 16 05:05:35.328020 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Sep 16 05:05:35.337161 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit.
Sep 16 05:05:35.338914 systemd[1]: sshd@24-10.128.0.94:22-139.178.68.195:39500.service: Deactivated successfully.
Sep 16 05:05:35.344873 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 05:05:35.351747 systemd-logind[1537]: Removed session 25.