Sep 4 00:05:58.171896 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025 Sep 4 00:05:58.171950 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:05:58.171968 kernel: BIOS-provided physical RAM map: Sep 4 00:05:58.171982 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 4 00:05:58.171996 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 4 00:05:58.172010 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 4 00:05:58.172031 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 4 00:05:58.172046 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 4 00:05:58.172061 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable Sep 4 00:05:58.172074 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data Sep 4 00:05:58.172089 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable Sep 4 00:05:58.172103 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 4 00:05:58.172118 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 4 00:05:58.172132 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 4 00:05:58.172154 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 4 00:05:58.172170 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 4 00:05:58.172185 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 4 00:05:58.172201 kernel: NX (Execute Disable) protection: active Sep 4 00:05:58.172216 kernel: APIC: Static calls initialized Sep 4 00:05:58.172232 kernel: efi: EFI v2.7 by EDK II Sep 4 00:05:58.172248 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018 Sep 4 00:05:58.172268 kernel: random: crng init done Sep 4 00:05:58.172284 kernel: secureboot: Secure boot disabled Sep 4 00:05:58.172300 kernel: SMBIOS 2.4 present. 
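The BIOS-e820 entries above describe physical RAM as inclusive [start-end] ranges tagged usable, reserved, ACPI data, or ACPI NVS. Below is a minimal sketch, assuming this journal dump is saved to a placeholder file named boot.log, that totals each range type; the usable total lands within roughly a megabyte of the 7860552K figure the kernel reports further down in this boot.

```python
import re

# The e820 entries above look like:
#   BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
# Range types seen in this log: usable, reserved, ACPI data, ACPI NVS.
E820_RE = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI data|ACPI NVS)"
)

def e820_totals(text: str) -> dict[str, int]:
    """Return total bytes per e820 range type found in a boot log."""
    totals: dict[str, int] = {}
    for start, end, kind in E820_RE.findall(text):
        size = int(end, 16) - int(start, 16) + 1   # ranges are inclusive
        totals[kind] = totals.get(kind, 0) + size
    return totals

if __name__ == "__main__":
    with open("boot.log") as f:                     # placeholder path for this dump
        totals = e820_totals(f.read())
    for kind, size in sorted(totals.items()):
        print(f"{kind:10s} {size // 1024:>10d} KiB")
```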
Sep 4 00:05:58.172317 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 4 00:05:58.172332 kernel: DMI: Memory slots populated: 1/1 Sep 4 00:05:58.172346 kernel: Hypervisor detected: KVM Sep 4 00:05:58.172361 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 00:05:58.172377 kernel: kvm-clock: using sched offset of 14957528924 cycles Sep 4 00:05:58.172394 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 00:05:58.172410 kernel: tsc: Detected 2299.998 MHz processor Sep 4 00:05:58.172425 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 00:05:58.172446 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 00:05:58.172461 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 4 00:05:58.172478 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 4 00:05:58.172492 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 00:05:58.172508 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 4 00:05:58.172524 kernel: Using GB pages for direct mapping Sep 4 00:05:58.172541 kernel: ACPI: Early table checksum verification disabled Sep 4 00:05:58.172558 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 4 00:05:58.172584 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 4 00:05:58.172601 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 4 00:05:58.172618 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 4 00:05:58.172634 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 4 00:05:58.172652 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 4 00:05:58.172668 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 4 00:05:58.172689 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 4 00:05:58.172707 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 4 00:05:58.172724 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 4 00:05:58.172756 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 4 00:05:58.172774 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 4 00:05:58.172792 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 4 00:05:58.172809 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 4 00:05:58.172833 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 4 00:05:58.172849 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 4 00:05:58.172869 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 4 00:05:58.172887 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 4 00:05:58.172904 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 4 00:05:58.172921 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 4 00:05:58.172937 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 4 00:05:58.172954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 4 00:05:58.172971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 4 00:05:58.172989 kernel: NUMA: Node 0 [mem 
0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Sep 4 00:05:58.173007 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Sep 4 00:05:58.173028 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Sep 4 00:05:58.173044 kernel: Zone ranges: Sep 4 00:05:58.173060 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 00:05:58.173077 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 4 00:05:58.173095 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 4 00:05:58.173112 kernel: Device empty Sep 4 00:05:58.173129 kernel: Movable zone start for each node Sep 4 00:05:58.173147 kernel: Early memory node ranges Sep 4 00:05:58.173164 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 4 00:05:58.173185 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 4 00:05:58.173202 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff] Sep 4 00:05:58.173219 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff] Sep 4 00:05:58.173236 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 4 00:05:58.173253 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 4 00:05:58.173270 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 4 00:05:58.173287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 00:05:58.173304 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 4 00:05:58.173322 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 4 00:05:58.173342 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Sep 4 00:05:58.173360 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 4 00:05:58.173377 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 4 00:05:58.173394 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 00:05:58.173412 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 00:05:58.173429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 00:05:58.173446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 00:05:58.173463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 00:05:58.173481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 00:05:58.173502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 00:05:58.173519 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 00:05:58.173536 kernel: CPU topo: Max. logical packages: 1 Sep 4 00:05:58.173553 kernel: CPU topo: Max. logical dies: 1 Sep 4 00:05:58.173571 kernel: CPU topo: Max. dies per package: 1 Sep 4 00:05:58.173588 kernel: CPU topo: Max. threads per core: 2 Sep 4 00:05:58.173605 kernel: CPU topo: Num. cores per package: 1 Sep 4 00:05:58.173623 kernel: CPU topo: Num. 
threads per package: 2 Sep 4 00:05:58.173640 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 4 00:05:58.173657 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 4 00:05:58.173678 kernel: Booting paravirtualized kernel on KVM Sep 4 00:05:58.173696 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 00:05:58.173714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 4 00:05:58.173744 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 4 00:05:58.173771 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 4 00:05:58.173785 kernel: pcpu-alloc: [0] 0 1 Sep 4 00:05:58.173800 kernel: kvm-guest: PV spinlocks enabled Sep 4 00:05:58.173814 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 00:05:58.173844 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:05:58.173858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 00:05:58.173873 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 4 00:05:58.173887 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 00:05:58.173902 kernel: Fallback order for Node 0: 0 Sep 4 00:05:58.173916 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Sep 4 00:05:58.173930 kernel: Policy zone: Normal Sep 4 00:05:58.173945 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 00:05:58.173960 kernel: software IO TLB: area num 2. Sep 4 00:05:58.173993 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 00:05:58.174009 kernel: Kernel/User page tables isolation: enabled Sep 4 00:05:58.174029 kernel: ftrace: allocating 40099 entries in 157 pages Sep 4 00:05:58.174047 kernel: ftrace: allocated 157 pages with 5 groups Sep 4 00:05:58.174063 kernel: Dynamic Preempt: voluntary Sep 4 00:05:58.174081 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 00:05:58.174103 kernel: rcu: RCU event tracing is enabled. Sep 4 00:05:58.174120 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 00:05:58.174138 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 00:05:58.174160 kernel: Rude variant of Tasks RCU enabled. Sep 4 00:05:58.174178 kernel: Tracing variant of Tasks RCU enabled. Sep 4 00:05:58.174194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 00:05:58.174212 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 00:05:58.174231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:05:58.174250 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:05:58.174268 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 4 00:05:58.174291 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 4 00:05:58.174309 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
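The command line echoed above is mostly key=value tokens plus a few bare flags, and this second echo carries rootflags=rw mount.usrflags=ro prepended a second time compared with the first "Command line:" entry. A small sketch of the split follows; letting the last occurrence of a key win is an illustrative choice, since real consumers handle repeated parameters in parameter-specific ways.

```python
def parse_cmdline(cmdline: str) -> dict[str, str]:
    """Split a kernel command line into key/value pairs.

    Bare flags (no '=') get an empty value. When a key repeats, as
    rootflags= and mount.usrflags= do above, this sketch lets the last
    occurrence win; real consumers differ in how they merge duplicates.
    """
    params: dict[str, str] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

# Example with a shortened version of the command line from this boot:
cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200n8 "
           "flatcar.first_boot=detected flatcar.oem.id=gce")
print(parse_cmdline(cmdline)["flatcar.oem.id"])   # -> gce
```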
Sep 4 00:05:58.174327 kernel: Console: colour dummy device 80x25 Sep 4 00:05:58.174346 kernel: printk: legacy console [ttyS0] enabled Sep 4 00:05:58.174364 kernel: ACPI: Core revision 20240827 Sep 4 00:05:58.174383 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 00:05:58.174401 kernel: x2apic enabled Sep 4 00:05:58.174420 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 00:05:58.174438 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 4 00:05:58.174460 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 00:05:58.174479 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Sep 4 00:05:58.174498 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 4 00:05:58.174517 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 4 00:05:58.174535 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 00:05:58.174554 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 4 00:05:58.174573 kernel: Spectre V2 : Mitigation: IBRS Sep 4 00:05:58.174591 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 4 00:05:58.174610 kernel: RETBleed: Mitigation: IBRS Sep 4 00:05:58.174632 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 00:05:58.174650 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 4 00:05:58.174669 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 00:05:58.174688 kernel: MDS: Mitigation: Clear CPU buffers Sep 4 00:05:58.174706 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 4 00:05:58.174725 kernel: active return thunk: its_return_thunk Sep 4 00:05:58.175649 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 4 00:05:58.175671 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 00:05:58.175689 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 00:05:58.175713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 00:05:58.175746 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 00:05:58.175765 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 4 00:05:58.175783 kernel: Freeing SMP alternatives memory: 32K Sep 4 00:05:58.176788 kernel: pid_max: default: 32768 minimum: 301 Sep 4 00:05:58.176809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 00:05:58.176837 kernel: landlock: Up and running. Sep 4 00:05:58.176854 kernel: SELinux: Initializing. Sep 4 00:05:58.176871 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 00:05:58.176896 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 4 00:05:58.176914 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 4 00:05:58.176932 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 4 00:05:58.176950 kernel: signal: max sigframe size: 1776 Sep 4 00:05:58.176968 kernel: rcu: Hierarchical SRCU implementation. Sep 4 00:05:58.176986 kernel: rcu: Max phase no-delay instances is 400. 
Sep 4 00:05:58.177006 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 00:05:58.177024 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 4 00:05:58.177043 kernel: smp: Bringing up secondary CPUs ... Sep 4 00:05:58.177066 kernel: smpboot: x86: Booting SMP configuration: Sep 4 00:05:58.177082 kernel: .... node #0, CPUs: #1 Sep 4 00:05:58.177100 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 4 00:05:58.177119 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 4 00:05:58.177137 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 00:05:58.177155 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 4 00:05:58.177174 kernel: Memory: 7566072K/7860552K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 288656K reserved, 0K cma-reserved) Sep 4 00:05:58.177192 kernel: devtmpfs: initialized Sep 4 00:05:58.177214 kernel: x86/mm: Memory block size: 128MB Sep 4 00:05:58.177232 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 4 00:05:58.177249 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 00:05:58.177268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 00:05:58.177287 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 00:05:58.177305 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 00:05:58.177324 kernel: audit: initializing netlink subsys (disabled) Sep 4 00:05:58.177342 kernel: audit: type=2000 audit(1756944353.826:1): state=initialized audit_enabled=0 res=1 Sep 4 00:05:58.177360 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 00:05:58.177382 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 00:05:58.177400 kernel: cpuidle: using governor menu Sep 4 00:05:58.177417 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 00:05:58.177436 kernel: dca service started, version 1.12.1 Sep 4 00:05:58.177454 kernel: PCI: Using configuration type 1 for base access Sep 4 00:05:58.177473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 00:05:58.177491 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 00:05:58.177510 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 00:05:58.177528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 00:05:58.177552 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 00:05:58.177570 kernel: ACPI: Added _OSI(Module Device) Sep 4 00:05:58.177587 kernel: ACPI: Added _OSI(Processor Device) Sep 4 00:05:58.177606 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 00:05:58.177623 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 4 00:05:58.177640 kernel: ACPI: Interpreter enabled Sep 4 00:05:58.177658 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 00:05:58.177676 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 00:05:58.177694 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 00:05:58.177717 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 4 00:05:58.178765 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 4 00:05:58.178800 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 00:05:58.179100 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 4 00:05:58.179294 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 4 00:05:58.179477 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 4 00:05:58.179499 kernel: PCI host bridge to bus 0000:00 Sep 4 00:05:58.179728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 00:05:58.182083 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 00:05:58.182256 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 00:05:58.182425 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 4 00:05:58.182585 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 00:05:58.182805 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 4 00:05:58.183008 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 4 00:05:58.183209 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 4 00:05:58.183388 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 00:05:58.183575 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Sep 4 00:05:58.185817 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Sep 4 00:05:58.186050 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Sep 4 00:05:58.186245 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 4 00:05:58.186433 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Sep 4 00:05:58.186608 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Sep 4 00:05:58.186877 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 4 00:05:58.187067 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Sep 4 00:05:58.187251 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Sep 4 00:05:58.187275 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 00:05:58.187295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 00:05:58.187321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 
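The PCI scan above finds the PIIX-style chipset functions (vendor 8086) plus three paravirtual devices from vendor 1af4, the virtio PCI vendor ID. A short sketch that decodes the [vendor:device] pairs printed in those lines; the name table is an assumed annotation drawn from the virtio transitional device IDs and common QEMU/PIIX parts, not something the log itself states.

```python
import re

# Assumed ID-to-name table (annotation only; not stated in the log).
NAMES = {
    ("8086", "1237"): "Intel 440FX host bridge",
    ("8086", "7110"): "Intel PIIX-family ISA bridge",
    ("8086", "7113"): "Intel PIIX4 ACPI",
    ("1af4", "1000"): "virtio-net (transitional)",
    ("1af4", "1004"): "virtio-scsi (transitional)",
    ("1af4", "1005"): "virtio-rng (transitional)",
}

PCI_RE = re.compile(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\]")

def decode(text: str) -> None:
    """Print a friendly name for each 'pci xxxx:xx:xx.x: [vvvv:dddd]' line."""
    for bdf, vendor, device in PCI_RE.findall(text):
        name = NAMES.get((vendor, device), "unknown")
        print(f"{bdf}  {vendor}:{device}  {name}")

decode("pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000")   # sample line from above
```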
Sep 4 00:05:58.187340 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 00:05:58.187359 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 00:05:58.187378 kernel: iommu: Default domain type: Translated Sep 4 00:05:58.187397 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 00:05:58.187416 kernel: efivars: Registered efivars operations Sep 4 00:05:58.187434 kernel: PCI: Using ACPI for IRQ routing Sep 4 00:05:58.187453 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 00:05:58.187470 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 4 00:05:58.187500 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 4 00:05:58.187527 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff] Sep 4 00:05:58.187557 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 4 00:05:58.187588 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 4 00:05:58.187606 kernel: vgaarb: loaded Sep 4 00:05:58.187624 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 00:05:58.187643 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 00:05:58.187662 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 00:05:58.187681 kernel: pnp: PnP ACPI init Sep 4 00:05:58.187703 kernel: pnp: PnP ACPI: found 7 devices Sep 4 00:05:58.187722 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 00:05:58.189777 kernel: NET: Registered PF_INET protocol family Sep 4 00:05:58.189804 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 4 00:05:58.189830 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 4 00:05:58.189850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 00:05:58.189870 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 00:05:58.189893 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 4 00:05:58.189918 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 4 00:05:58.189937 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 00:05:58.189958 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 4 00:05:58.189977 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 00:05:58.189996 kernel: NET: Registered PF_XDP protocol family Sep 4 00:05:58.190204 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 00:05:58.190374 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 00:05:58.190538 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 00:05:58.190721 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 4 00:05:58.193036 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 00:05:58.193065 kernel: PCI: CLS 0 bytes, default 64 Sep 4 00:05:58.193100 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 4 00:05:58.193116 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 4 00:05:58.193133 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 4 00:05:58.193150 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 4 00:05:58.193167 kernel: clocksource: Switched to clocksource tsc Sep 4 00:05:58.193191 kernel: Initialise system trusted keyrings Sep 4 00:05:58.193207 kernel: workingset: 
timestamp_bits=39 max_order=21 bucket_order=0 Sep 4 00:05:58.193223 kernel: Key type asymmetric registered Sep 4 00:05:58.193241 kernel: Asymmetric key parser 'x509' registered Sep 4 00:05:58.193258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 00:05:58.193276 kernel: io scheduler mq-deadline registered Sep 4 00:05:58.193294 kernel: io scheduler kyber registered Sep 4 00:05:58.193310 kernel: io scheduler bfq registered Sep 4 00:05:58.193327 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 00:05:58.193347 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 00:05:58.193571 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 4 00:05:58.193597 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 00:05:58.193829 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 4 00:05:58.193855 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 00:05:58.194043 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 4 00:05:58.194066 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 00:05:58.194084 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 00:05:58.194102 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 4 00:05:58.194125 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 4 00:05:58.194143 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 4 00:05:58.194335 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 4 00:05:58.194359 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 00:05:58.194377 kernel: i8042: Warning: Keylock active Sep 4 00:05:58.194394 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 00:05:58.194412 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 00:05:58.194586 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 4 00:05:58.194776 kernel: rtc_cmos 00:00: registered as rtc0 Sep 4 00:05:58.194956 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T00:05:57 UTC (1756944357) Sep 4 00:05:58.195146 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 4 00:05:58.195169 kernel: intel_pstate: CPU model not supported Sep 4 00:05:58.195204 kernel: pstore: Using crash dump compression: deflate Sep 4 00:05:58.195223 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 00:05:58.195239 kernel: NET: Registered PF_INET6 protocol family Sep 4 00:05:58.195256 kernel: Segment Routing with IPv6 Sep 4 00:05:58.195281 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 00:05:58.195297 kernel: NET: Registered PF_PACKET protocol family Sep 4 00:05:58.195313 kernel: Key type dns_resolver registered Sep 4 00:05:58.195336 kernel: IPI shorthand broadcast: enabled Sep 4 00:05:58.195353 kernel: sched_clock: Marking stable (3627004231, 157188535)->(3863452577, -79259811) Sep 4 00:05:58.195370 kernel: registered taskstats version 1 Sep 4 00:05:58.195386 kernel: Loading compiled-in X.509 certificates Sep 4 00:05:58.195405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10' Sep 4 00:05:58.195423 kernel: Demotion targets for Node 0: null Sep 4 00:05:58.195446 kernel: Key type .fscrypt registered Sep 4 00:05:58.195464 kernel: Key type fscrypt-provisioning registered Sep 4 00:05:58.195483 kernel: ima: Allocated hash algorithm: sha1 Sep 4 00:05:58.195502 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input1 Sep 4 00:05:58.195521 kernel: ima: No architecture policies found Sep 4 00:05:58.195539 kernel: clk: Disabling unused clocks Sep 4 00:05:58.195557 kernel: Warning: unable to open an initial console. Sep 4 00:05:58.195585 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 4 00:05:58.195604 kernel: Write protecting the kernel read-only data: 24576k Sep 4 00:05:58.195626 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 4 00:05:58.195643 kernel: Run /init as init process Sep 4 00:05:58.195661 kernel: with arguments: Sep 4 00:05:58.195679 kernel: /init Sep 4 00:05:58.195696 kernel: with environment: Sep 4 00:05:58.195713 kernel: HOME=/ Sep 4 00:05:58.196801 kernel: TERM=linux Sep 4 00:05:58.196849 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 00:05:58.196872 systemd[1]: Successfully made /usr/ read-only. Sep 4 00:05:58.196902 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:05:58.196922 systemd[1]: Detected virtualization google. Sep 4 00:05:58.196939 systemd[1]: Detected architecture x86-64. Sep 4 00:05:58.196957 systemd[1]: Running in initrd. Sep 4 00:05:58.196975 systemd[1]: No hostname configured, using default hostname. Sep 4 00:05:58.196993 systemd[1]: Hostname set to . Sep 4 00:05:58.197015 systemd[1]: Initializing machine ID from random generator. Sep 4 00:05:58.197034 systemd[1]: Queued start job for default target initrd.target. Sep 4 00:05:58.197074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:05:58.197097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:05:58.197117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 00:05:58.197135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 00:05:58.197154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 00:05:58.197177 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 00:05:58.197198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 00:05:58.197228 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 00:05:58.197245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:05:58.197264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:05:58.197283 systemd[1]: Reached target paths.target - Path Units. Sep 4 00:05:58.197311 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:05:58.197328 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:05:58.197346 systemd[1]: Reached target timers.target - Timer Units. Sep 4 00:05:58.197365 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:05:58.197385 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
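The rtc_cmos entry above prints both a human-readable UTC time and the matching Unix epoch value, which makes for an easy cross-check; a two-line sketch confirming the pair.

```python
from datetime import datetime, timezone

# "setting system clock to 2025-09-04T00:05:57 UTC (1756944357)" from the log above
epoch = 1756944357
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-09-04T00:05:57+00:00, matching the logged wall-clock time
```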
Sep 4 00:05:58.197405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 00:05:58.197425 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 00:05:58.197444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:05:58.197464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:05:58.197489 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:05:58.197509 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 00:05:58.197529 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 00:05:58.197549 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 00:05:58.197570 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 00:05:58.197590 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 00:05:58.197610 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 00:05:58.197631 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:05:58.197654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 00:05:58.197673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:05:58.197693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 00:05:58.198093 systemd-journald[206]: Collecting audit messages is disabled. Sep 4 00:05:58.198151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:05:58.198173 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 00:05:58.198194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:05:58.198214 systemd-journald[206]: Journal started Sep 4 00:05:58.198260 systemd-journald[206]: Runtime Journal (/run/log/journal/c0fed0ddb22643cb9e7f28c53075bff6) is 8M, max 148.9M, 140.9M free. Sep 4 00:05:58.193130 systemd-modules-load[208]: Inserted module 'overlay' Sep 4 00:05:58.200763 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:05:58.210942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:05:58.238381 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 00:05:58.242842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:05:58.250511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:05:58.263208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 00:05:58.263248 kernel: Bridge firewalling registered Sep 4 00:05:58.260603 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 4 00:05:58.263051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:05:58.268243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:05:58.279320 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:05:58.289312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 00:05:58.297240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:05:58.302216 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:05:58.318345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:05:58.321631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:05:58.331829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:05:58.343937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 00:05:58.378697 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e Sep 4 00:05:58.402250 systemd-resolved[241]: Positive Trust Anchors: Sep 4 00:05:58.402822 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 00:05:58.402895 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 00:05:58.409315 systemd-resolved[241]: Defaulting to hostname 'linux'. Sep 4 00:05:58.413114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 00:05:58.424981 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:05:58.499784 kernel: SCSI subsystem initialized Sep 4 00:05:58.511761 kernel: Loading iSCSI transport class v2.0-870. Sep 4 00:05:58.524799 kernel: iscsi: registered transport (tcp) Sep 4 00:05:58.550215 kernel: iscsi: registered transport (qla4xxx) Sep 4 00:05:58.550301 kernel: QLogic iSCSI HBA Driver Sep 4 00:05:58.573823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:05:58.593384 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:05:58.600828 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:05:58.661257 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 00:05:58.663648 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 00:05:58.721771 kernel: raid6: avx2x4 gen() 17860 MB/s Sep 4 00:05:58.738770 kernel: raid6: avx2x2 gen() 17860 MB/s Sep 4 00:05:58.756381 kernel: raid6: avx2x1 gen() 13965 MB/s Sep 4 00:05:58.756443 kernel: raid6: using algorithm avx2x4 gen() 17860 MB/s Sep 4 00:05:58.774247 kernel: raid6: .... 
xor() 7777 MB/s, rmw enabled Sep 4 00:05:58.774320 kernel: raid6: using avx2x2 recovery algorithm Sep 4 00:05:58.797776 kernel: xor: automatically using best checksumming function avx Sep 4 00:05:58.985773 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 00:05:58.994668 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:05:58.998053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:05:59.033022 systemd-udevd[456]: Using default interface naming scheme 'v255'. Sep 4 00:05:59.042459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:05:59.047918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 00:05:59.086755 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 4 00:05:59.121086 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:05:59.123297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:05:59.220681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:05:59.226134 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 00:05:59.347769 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 00:05:59.347884 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Sep 4 00:05:59.393773 kernel: AES CTR mode by8 optimization enabled Sep 4 00:05:59.426052 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 00:05:59.442784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:05:59.452587 kernel: scsi host0: Virtio SCSI HBA Sep 4 00:05:59.443051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:05:59.448591 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:05:59.467325 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 4 00:05:59.467652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:05:59.476076 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:05:59.545759 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 4 00:05:59.546149 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 4 00:05:59.547759 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 4 00:05:59.548072 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 4 00:05:59.550842 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 4 00:05:59.566303 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 00:05:59.566399 kernel: GPT:17805311 != 25165823 Sep 4 00:05:59.566424 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 00:05:59.566447 kernel: GPT:17805311 != 25165823 Sep 4 00:05:59.566757 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 00:05:59.566784 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 00:05:59.568997 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 4 00:05:59.571717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:05:59.677452 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 4 00:05:59.678525 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
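The GPT complaints above mean the backup GPT header sits at LBA 17805311 while the disk's last LBA is 25165823, the usual sign of a disk image built for a smaller disk than the provisioned persistent disk; a small worked check against the sizes the kernel prints for sda.

```python
SECTOR = 512

backup_hdr_lba = 17805311     # where the image's GPT expects its backup header
last_lba       = 25165823     # actual last LBA (GPT:17805311 != 25165823 above)
sectors        = 25165824     # "[sda] 25165824 512-byte logical blocks"

disk_bytes  = sectors * SECTOR
image_bytes = (backup_hdr_lba + 1) * SECTOR

print(f"disk : {disk_bytes/1e9:.1f} GB / {disk_bytes/2**30:.1f} GiB")   # 12.9 GB / 12.0 GiB, as logged
print(f"image: {image_bytes/2**30:.2f} GiB GPT footprint")
print(f"gap  : {(last_lba - backup_hdr_lba)*SECTOR/2**30:.2f} GiB beyond the image's GPT")
```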
Sep 4 00:05:59.697191 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 4 00:05:59.711496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 4 00:05:59.726965 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 4 00:05:59.727293 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 4 00:05:59.732408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:05:59.737307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:05:59.742168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:05:59.749635 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 00:05:59.763960 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 00:05:59.776011 disk-uuid[607]: Primary Header is updated. Sep 4 00:05:59.776011 disk-uuid[607]: Secondary Entries is updated. Sep 4 00:05:59.776011 disk-uuid[607]: Secondary Header is updated. Sep 4 00:05:59.794751 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 00:05:59.795925 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 00:05:59.818787 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 00:06:00.837874 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 00:06:00.837976 disk-uuid[608]: The operation has completed successfully. Sep 4 00:06:00.914223 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 00:06:00.914405 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 00:06:00.971160 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 00:06:00.999305 sh[629]: Success Sep 4 00:06:01.023256 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 00:06:01.023788 kernel: device-mapper: uevent: version 1.0.3 Sep 4 00:06:01.023850 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 00:06:01.036779 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 4 00:06:01.130390 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 00:06:01.135859 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 00:06:01.153256 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 00:06:01.177775 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (641) Sep 4 00:06:01.181141 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071 Sep 4 00:06:01.181214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:06:01.206845 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 00:06:01.206958 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 00:06:01.206984 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 00:06:01.212725 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 00:06:01.214026 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:06:01.217375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 4 00:06:01.219873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 00:06:01.226649 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 00:06:01.265767 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (670) Sep 4 00:06:01.269180 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:06:01.269258 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:06:01.276556 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 00:06:01.276647 kernel: BTRFS info (device sda6): turning on async discard Sep 4 00:06:01.276672 kernel: BTRFS info (device sda6): enabling free space tree Sep 4 00:06:01.283983 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:06:01.284665 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 00:06:01.290483 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 00:06:01.430003 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:06:01.433938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 00:06:01.541810 systemd-networkd[810]: lo: Link UP Sep 4 00:06:01.542346 systemd-networkd[810]: lo: Gained carrier Sep 4 00:06:01.546323 systemd-networkd[810]: Enumeration completed Sep 4 00:06:01.546501 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:06:01.556664 ignition[727]: Ignition 2.21.0 Sep 4 00:06:01.546913 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:06:01.556672 ignition[727]: Stage: fetch-offline Sep 4 00:06:01.546921 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 00:06:01.556707 ignition[727]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:01.552569 systemd-networkd[810]: eth0: Link UP Sep 4 00:06:01.556718 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:01.552831 systemd-networkd[810]: eth0: Gained carrier Sep 4 00:06:01.557069 ignition[727]: parsed url from cmdline: "" Sep 4 00:06:01.552862 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:06:01.557074 ignition[727]: no config URL provided Sep 4 00:06:01.552966 systemd[1]: Reached target network.target - Network. Sep 4 00:06:01.557082 ignition[727]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:06:01.559620 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:06:01.557094 ignition[727]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:06:01.564846 systemd-networkd[810]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532' Sep 4 00:06:01.557105 ignition[727]: failed to fetch config: resource requires networking Sep 4 00:06:01.564862 systemd-networkd[810]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 4 00:06:01.557363 ignition[727]: Ignition finished successfully Sep 4 00:06:01.569269 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 00:06:01.611486 ignition[819]: Ignition 2.21.0 Sep 4 00:06:01.627194 unknown[819]: fetched base config from "system" Sep 4 00:06:01.611495 ignition[819]: Stage: fetch Sep 4 00:06:01.627205 unknown[819]: fetched base config from "system" Sep 4 00:06:01.611653 ignition[819]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:01.627215 unknown[819]: fetched user config from "gcp" Sep 4 00:06:01.611663 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:01.631298 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 00:06:01.611796 ignition[819]: parsed url from cmdline: "" Sep 4 00:06:01.637485 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 00:06:01.611802 ignition[819]: no config URL provided Sep 4 00:06:01.611812 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 00:06:01.611826 ignition[819]: no config at "/usr/lib/ignition/user.ign" Sep 4 00:06:01.611885 ignition[819]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 4 00:06:01.618178 ignition[819]: GET result: OK Sep 4 00:06:01.618385 ignition[819]: parsing config with SHA512: 78a9d06a27839b844301bedb97712cdbfde81d4f28ae1ed7604ca288e5eb342611c1e9f0141003faf0aa3e9a0e7b57b7ea1db9202a39e77d847734bce7d21158 Sep 4 00:06:01.627684 ignition[819]: fetch: fetch complete Sep 4 00:06:01.627691 ignition[819]: fetch: fetch passed Sep 4 00:06:01.628021 ignition[819]: Ignition finished successfully Sep 4 00:06:01.687063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 00:06:01.683961 ignition[826]: Ignition 2.21.0 Sep 4 00:06:01.691094 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 00:06:01.683969 ignition[826]: Stage: kargs Sep 4 00:06:01.684156 ignition[826]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:01.684167 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:01.685250 ignition[826]: kargs: kargs passed Sep 4 00:06:01.685305 ignition[826]: Ignition finished successfully Sep 4 00:06:01.739304 ignition[832]: Ignition 2.21.0 Sep 4 00:06:01.739323 ignition[832]: Stage: disks Sep 4 00:06:01.739593 ignition[832]: no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:01.742941 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 00:06:01.739612 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:01.747233 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 00:06:01.740963 ignition[832]: disks: disks passed Sep 4 00:06:01.750094 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 00:06:01.741023 ignition[832]: Ignition finished successfully Sep 4 00:06:01.754109 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:06:01.759106 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 00:06:01.763144 systemd[1]: Reached target basic.target - Basic System. Sep 4 00:06:01.769289 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 00:06:01.811956 systemd-fsck[841]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 4 00:06:01.823526 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 00:06:01.830537 systemd[1]: Mounting sysroot.mount - /sysroot... 
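Ignition's fetch stage above pulls user-data straight from the GCE metadata server at 169.254.169.254. A minimal sketch of the same request, assuming the standard Metadata-Flavor: Google header the metadata server requires (the header is not shown in the log) and a VM where that endpoint is reachable.

```python
import urllib.request

# Same endpoint Ignition queries above; the Metadata-Flavor header is an
# assumption taken from GCE metadata-server conventions, not from this log.
URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=5) as resp:     # only works on a GCE VM
    print(resp.read().decode())
```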
Sep 4 00:06:02.016784 kernel: EXT4-fs (sda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none. Sep 4 00:06:02.017681 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 00:06:02.023069 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 00:06:02.032874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 00:06:02.036969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 00:06:02.044497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 00:06:02.044603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 00:06:02.044651 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:06:02.059868 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 00:06:02.067393 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 00:06:02.076789 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (849) Sep 4 00:06:02.080601 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:06:02.080672 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 00:06:02.089188 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 4 00:06:02.089276 kernel: BTRFS info (device sda6): turning on async discard Sep 4 00:06:02.089302 kernel: BTRFS info (device sda6): enabling free space tree Sep 4 00:06:02.092351 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 00:06:02.181397 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 00:06:02.191000 initrd-setup-root[880]: cut: /sysroot/etc/group: No such file or directory Sep 4 00:06:02.199290 initrd-setup-root[887]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 00:06:02.207046 initrd-setup-root[894]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 00:06:02.373541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 00:06:02.382032 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 00:06:02.387961 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 00:06:02.417169 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 00:06:02.419577 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d Sep 4 00:06:02.460170 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 00:06:02.463658 ignition[961]: INFO : Ignition 2.21.0 Sep 4 00:06:02.463658 ignition[961]: INFO : Stage: mount Sep 4 00:06:02.468895 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:02.468895 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:02.468895 ignition[961]: INFO : mount: mount passed Sep 4 00:06:02.468895 ignition[961]: INFO : Ignition finished successfully Sep 4 00:06:02.471159 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 00:06:02.476488 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 00:06:02.501069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 00:06:02.536778 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (974)
Sep 4 00:06:02.540678 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 4 00:06:02.540811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 00:06:02.547329 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 00:06:02.547450 kernel: BTRFS info (device sda6): turning on async discard
Sep 4 00:06:02.547496 kernel: BTRFS info (device sda6): enabling free space tree
Sep 4 00:06:02.551241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 00:06:02.589126 ignition[991]: INFO : Ignition 2.21.0
Sep 4 00:06:02.589126 ignition[991]: INFO : Stage: files
Sep 4 00:06:02.596199 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 00:06:02.596199 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 00:06:02.596199 ignition[991]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 00:06:02.596199 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 00:06:02.596199 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 00:06:02.611053 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 00:06:02.611053 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 00:06:02.611053 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 00:06:02.611053 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:06:02.611053 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 00:06:02.603245 unknown[991]: wrote ssh authorized keys file for user: core
Sep 4 00:06:02.741366 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 00:06:02.842979 systemd-networkd[810]: eth0: Gained IPv6LL
Sep 4 00:06:03.397056 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 00:06:03.397056 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 00:06:03.405938 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 00:06:03.602239 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 00:06:03.779920 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 00:06:03.779920 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:06:03.788916 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 00:06:04.123849 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 00:06:05.071764 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 00:06:05.071764 ignition[991]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 00:06:05.081937 ignition[991]: INFO : files: files passed
Sep 4 00:06:05.081937 ignition[991]: INFO : Ignition finished successfully
Sep 4 00:06:05.083084 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 00:06:05.088019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 00:06:05.095804 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 00:06:05.138932 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:06:05.138932 initrd-setup-root-after-ignition[1019]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:06:05.111420 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 00:06:05.153930 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 00:06:05.111776 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 00:06:05.132016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:06:05.137216 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 00:06:05.143262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 00:06:05.211975 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 00:06:05.212148 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 00:06:05.217702 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 00:06:05.223966 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 00:06:05.229075 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 00:06:05.230728 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 00:06:05.268966 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:06:05.276386 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 00:06:05.305323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 00:06:05.305699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:06:05.314311 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 00:06:05.317721 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 00:06:05.318605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 00:06:05.325510 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 00:06:05.327393 systemd[1]: Stopped target basic.target - Basic System. Sep 4 00:06:05.332403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 00:06:05.337587 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 00:06:05.340989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 00:06:05.346487 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 00:06:05.351519 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 00:06:05.355455 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 00:06:05.360579 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 00:06:05.365642 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 00:06:05.370507 systemd[1]: Stopped target swap.target - Swaps. Sep 4 00:06:05.374542 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 00:06:05.375057 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 4 00:06:05.382271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:06:05.385801 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:06:05.390338 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 00:06:05.390835 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 00:06:05.395659 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 00:06:05.396129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 00:06:05.404154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 00:06:05.404518 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 00:06:05.407685 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 00:06:05.407989 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 00:06:05.416684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 00:06:05.426460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 00:06:05.437092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 00:06:05.437356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:06:05.445295 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 00:06:05.445548 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 00:06:05.461780 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 00:06:05.461940 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 00:06:05.477323 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 00:06:05.479918 ignition[1044]: INFO : Ignition 2.21.0 Sep 4 00:06:05.479918 ignition[1044]: INFO : Stage: umount Sep 4 00:06:05.479918 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 00:06:05.479918 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 4 00:06:05.495045 ignition[1044]: INFO : umount: umount passed Sep 4 00:06:05.495045 ignition[1044]: INFO : Ignition finished successfully Sep 4 00:06:05.483122 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 00:06:05.483517 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 00:06:05.492577 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 00:06:05.492763 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 00:06:05.498803 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 00:06:05.498978 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 00:06:05.501978 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 00:06:05.502055 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 00:06:05.506142 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 00:06:05.506251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 00:06:05.512980 systemd[1]: Stopped target network.target - Network. Sep 4 00:06:05.516949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 00:06:05.517081 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 00:06:05.520261 systemd[1]: Stopped target paths.target - Path Units. Sep 4 00:06:05.524127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 4 00:06:05.527863 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:06:05.528115 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 00:06:05.532136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 00:06:05.536230 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 00:06:05.536307 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 00:06:05.540249 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 00:06:05.540336 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 00:06:05.544240 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 00:06:05.544348 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 00:06:05.548138 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 00:06:05.548377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 00:06:05.552116 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 00:06:05.552313 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 00:06:05.555239 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 00:06:05.565074 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 00:06:05.568968 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 00:06:05.569225 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 00:06:05.578267 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 00:06:05.578574 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 00:06:05.578698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 00:06:05.585046 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 00:06:05.586346 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 00:06:05.587068 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 00:06:05.587115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:06:05.593392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 00:06:05.594010 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 00:06:05.594223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 00:06:05.594516 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:06:05.594806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:06:05.595498 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 00:06:05.595715 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 00:06:05.603977 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 00:06:05.604088 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:06:05.613230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:06:05.620777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 00:06:05.620924 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:06:05.635227 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 4 00:06:05.635523 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 00:06:05.640054 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 00:06:05.640295 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 00:06:05.645898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 00:06:05.645967 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 00:06:05.648164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 00:06:05.648339 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:06:05.653138 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 00:06:05.653395 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 00:06:05.664263 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 00:06:05.664353 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 00:06:05.671207 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 00:06:05.671707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 00:06:05.680921 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 00:06:05.689076 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 00:06:05.689164 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:06:05.700692 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 00:06:05.700956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 00:06:05.708288 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 00:06:05.708369 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:06:05.711366 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 00:06:05.711555 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:06:05.717355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 00:06:05.717548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:06:05.724513 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 4 00:06:05.724638 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 4 00:06:05.820927 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Sep 4 00:06:05.724683 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 00:06:05.724747 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 00:06:05.725239 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 00:06:05.725354 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 00:06:05.729667 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 00:06:05.734028 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 00:06:05.768342 systemd[1]: Switching root. 
Sep 4 00:06:05.844988 systemd-journald[206]: Journal stopped Sep 4 00:06:07.952923 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 00:06:07.952992 kernel: SELinux: policy capability open_perms=1 Sep 4 00:06:07.953014 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 00:06:07.953032 kernel: SELinux: policy capability always_check_network=0 Sep 4 00:06:07.953050 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 00:06:07.953068 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 00:06:07.953092 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 00:06:07.953109 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 00:06:07.953440 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 00:06:07.958822 kernel: audit: type=1403 audit(1756944366.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 00:06:07.958850 systemd[1]: Successfully loaded SELinux policy in 54.122ms. Sep 4 00:06:07.958873 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.426ms. Sep 4 00:06:07.958897 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 00:06:07.958928 systemd[1]: Detected virtualization google. Sep 4 00:06:07.958951 systemd[1]: Detected architecture x86-64. Sep 4 00:06:07.958972 systemd[1]: Detected first boot. Sep 4 00:06:07.958995 systemd[1]: Initializing machine ID from random generator. Sep 4 00:06:07.959017 zram_generator::config[1088]: No configuration found. Sep 4 00:06:07.959045 kernel: Guest personality initialized and is inactive Sep 4 00:06:07.959065 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 00:06:07.959086 kernel: Initialized host personality Sep 4 00:06:07.959106 kernel: NET: Registered PF_VSOCK protocol family Sep 4 00:06:07.959127 systemd[1]: Populated /etc with preset unit settings. Sep 4 00:06:07.959150 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 00:06:07.959174 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 00:06:07.959200 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 00:06:07.959222 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 00:06:07.959244 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 00:06:07.959266 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 00:06:07.959288 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 00:06:07.959310 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 00:06:07.959332 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 00:06:07.959368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 00:06:07.959392 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 00:06:07.959414 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 00:06:07.959436 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 4 00:06:07.959459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 00:06:07.959481 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 00:06:07.959503 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 00:06:07.959526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 00:06:07.959555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 00:06:07.959582 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 00:06:07.959605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 00:06:07.959628 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 00:06:07.959650 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 00:06:07.959673 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 00:06:07.959696 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 00:06:07.959717 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 00:06:07.959813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 00:06:07.959840 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 00:06:07.959862 systemd[1]: Reached target slices.target - Slice Units. Sep 4 00:06:07.959886 systemd[1]: Reached target swap.target - Swaps. Sep 4 00:06:07.959909 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 00:06:07.959932 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 00:06:07.959955 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 00:06:07.959983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 00:06:07.960006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 00:06:07.960029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 00:06:07.960052 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 00:06:07.960076 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 00:06:07.960102 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 00:06:07.960130 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 00:06:07.960154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:07.960176 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 00:06:07.960198 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 00:06:07.960221 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 00:06:07.960246 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 00:06:07.960269 systemd[1]: Reached target machines.target - Containers. Sep 4 00:06:07.960291 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 4 00:06:07.960320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:06:07.960343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 00:06:07.960373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 00:06:07.960397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:06:07.960420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:06:07.960444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:06:07.960467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 00:06:07.960490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:06:07.960520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 00:06:07.960543 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 00:06:07.960566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 00:06:07.960589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 00:06:07.960615 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 00:06:07.960639 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:06:07.960663 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 00:06:07.960686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 00:06:07.960714 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 00:06:07.962805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 00:06:07.962843 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 00:06:07.962867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 00:06:07.962890 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 00:06:07.962913 systemd[1]: Stopped verity-setup.service. Sep 4 00:06:07.962936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:07.962958 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 00:06:07.962981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 00:06:07.963010 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 00:06:07.963031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 00:06:07.963053 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 00:06:07.963128 systemd-journald[1159]: Collecting audit messages is disabled. Sep 4 00:06:07.963179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 00:06:07.963204 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 00:06:07.963227 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 00:06:07.963251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Sep 4 00:06:07.963274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:06:07.963296 systemd-journald[1159]: Journal started Sep 4 00:06:07.963345 systemd-journald[1159]: Runtime Journal (/run/log/journal/f17f1937e0c4452cbe0c9f458fe52675) is 8M, max 148.9M, 140.9M free. Sep 4 00:06:07.440780 systemd[1]: Queued start job for default target multi-user.target. Sep 4 00:06:07.965134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:06:07.456034 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 00:06:07.456651 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 00:06:07.975027 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 00:06:07.978416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:06:07.980152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:06:07.984121 kernel: loop: module loaded Sep 4 00:06:07.985437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 00:06:07.990704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 00:06:07.998682 kernel: fuse: init (API version 7.41) Sep 4 00:06:07.998944 kernel: ACPI: bus type drm_connector registered Sep 4 00:06:08.002526 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 00:06:08.004092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 00:06:08.007694 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 00:06:08.009825 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 00:06:08.016618 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 00:06:08.018167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 00:06:08.046004 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 00:06:08.051825 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 00:06:08.060172 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 00:06:08.065297 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 00:06:08.072883 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 00:06:08.079963 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 00:06:08.080107 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 00:06:08.080163 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 00:06:08.084394 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 00:06:08.091922 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 00:06:08.095896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:06:08.102062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 00:06:08.108976 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 00:06:08.112926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 00:06:08.116526 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 00:06:08.119925 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 00:06:08.121998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:06:08.144045 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 00:06:08.151749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 00:06:08.157156 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 00:06:08.161167 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 00:06:08.201450 systemd-journald[1159]: Time spent on flushing to /var/log/journal/f17f1937e0c4452cbe0c9f458fe52675 is 86.209ms for 968 entries. Sep 4 00:06:08.201450 systemd-journald[1159]: System Journal (/var/log/journal/f17f1937e0c4452cbe0c9f458fe52675) is 8M, max 584.8M, 576.8M free. Sep 4 00:06:08.330425 systemd-journald[1159]: Received client request to flush runtime journal. Sep 4 00:06:08.330527 kernel: loop0: detected capacity change from 0 to 146240 Sep 4 00:06:08.330562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 00:06:08.241870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 00:06:08.248584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 00:06:08.252989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 00:06:08.266969 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 00:06:08.331503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:06:08.335600 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 00:06:08.357529 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 00:06:08.374774 kernel: loop1: detected capacity change from 0 to 113872 Sep 4 00:06:08.382829 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 4 00:06:08.382864 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 4 00:06:08.399647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 00:06:08.412038 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 00:06:08.441824 kernel: loop2: detected capacity change from 0 to 224512 Sep 4 00:06:08.459401 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 00:06:08.513681 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 00:06:08.522088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 00:06:08.575897 kernel: loop3: detected capacity change from 0 to 52072 Sep 4 00:06:08.643809 kernel: loop4: detected capacity change from 0 to 146240 Sep 4 00:06:08.650559 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Sep 4 00:06:08.652193 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Sep 4 00:06:08.677379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 00:06:08.706775 kernel: loop5: detected capacity change from 0 to 113872 Sep 4 00:06:08.754951 kernel: loop6: detected capacity change from 0 to 224512 Sep 4 00:06:08.813818 kernel: loop7: detected capacity change from 0 to 52072 Sep 4 00:06:08.849374 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 4 00:06:08.850988 (sd-merge)[1236]: Merged extensions into '/usr'. Sep 4 00:06:08.867990 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 00:06:08.868271 systemd[1]: Reloading... Sep 4 00:06:09.155779 zram_generator::config[1266]: No configuration found. Sep 4 00:06:09.318240 ldconfig[1205]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 00:06:09.407694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:09.615312 systemd[1]: Reloading finished in 746 ms. Sep 4 00:06:09.632614 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 00:06:09.636760 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 00:06:09.656999 systemd[1]: Starting ensure-sysext.service... Sep 4 00:06:09.664027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 00:06:09.709454 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)... Sep 4 00:06:09.709482 systemd[1]: Reloading... Sep 4 00:06:09.775101 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 00:06:09.776864 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 00:06:09.777367 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 00:06:09.777910 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 00:06:09.780329 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 00:06:09.781099 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Sep 4 00:06:09.781366 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Sep 4 00:06:09.791825 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:06:09.792795 systemd-tmpfiles[1304]: Skipping /boot Sep 4 00:06:09.837350 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 00:06:09.838021 systemd-tmpfiles[1304]: Skipping /boot Sep 4 00:06:09.905800 zram_generator::config[1331]: No configuration found. Sep 4 00:06:10.063296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:10.182581 systemd[1]: Reloading finished in 472 ms. Sep 4 00:06:10.207449 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 00:06:10.226213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 00:06:10.242406 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 4 00:06:10.256834 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 00:06:10.263939 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 00:06:10.275432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 00:06:10.289177 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 00:06:10.296304 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 00:06:10.309939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:10.310316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:06:10.314655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 00:06:10.327650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 00:06:10.335484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 00:06:10.339146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:06:10.339816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:06:10.347086 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 00:06:10.350927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:10.361448 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:10.362116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:06:10.362403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 00:06:10.362569 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 00:06:10.363800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:10.379006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 00:06:10.380862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 00:06:10.387176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 00:06:10.387558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 00:06:10.411915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 00:06:10.412296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 00:06:10.423688 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 00:06:10.434143 systemd[1]: Starting setup-oem.service - Setup OEM... 
Sep 4 00:06:10.436108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 00:06:10.436195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 00:06:10.436293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 00:06:10.436374 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 00:06:10.439939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 00:06:10.440894 systemd[1]: Finished ensure-sysext.service.
Sep 4 00:06:10.456180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 00:06:10.457586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 00:06:10.464016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 00:06:10.481875 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 00:06:10.486628 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 00:06:10.487309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 00:06:10.529076 augenrules[1412]: No rules
Sep 4 00:06:10.532020 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 00:06:10.535269 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 00:06:10.543071 systemd-udevd[1381]: Using default interface naming scheme 'v255'.
Sep 4 00:06:10.544603 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 00:06:10.547232 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 00:06:10.555007 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 4 00:06:10.558515 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 00:06:10.576968 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 00:06:10.632889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 00:06:10.636275 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 00:06:10.667533 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 00:06:10.671366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 00:06:10.681969 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 00:06:10.705381 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 4 00:06:10.770682 systemd-resolved[1376]: Positive Trust Anchors:
Sep 4 00:06:10.770710 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 00:06:10.771292 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 00:06:10.789635 systemd-resolved[1376]: Defaulting to hostname 'linux'.
Sep 4 00:06:10.795826 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 00:06:10.799033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 00:06:10.800898 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 00:06:10.805064 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 00:06:10.809008 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 00:06:10.811915 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 00:06:10.817252 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 00:06:10.821234 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 00:06:10.826942 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 00:06:10.829632 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 00:06:10.829719 systemd[1]: Reached target paths.target - Path Units.
Sep 4 00:06:10.832960 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 00:06:10.840893 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 00:06:10.850211 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 00:06:10.857897 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 00:06:10.862188 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 00:06:10.865897 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 00:06:10.879493 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 00:06:10.883960 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 00:06:10.892933 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 00:06:10.898990 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 00:06:10.900968 systemd[1]: Reached target basic.target - Basic System.
Sep 4 00:06:10.904038 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:06:10.904091 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 00:06:10.907851 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 00:06:10.915047 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 00:06:10.926198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 00:06:10.934365 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 00:06:10.945194 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 00:06:10.948927 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 00:06:10.953140 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 00:06:10.966120 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 00:06:10.994051 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 00:06:10.998666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 00:06:11.014117 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 00:06:11.027408 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 00:06:11.043115 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 00:06:11.050503 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 4 00:06:11.061230 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 00:06:11.067264 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 00:06:11.081096 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 00:06:11.086428 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Sep 4 00:06:11.095097 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 00:06:11.095936 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 4 00:06:11.116649 oslogin_cache_refresh[1478]: Refreshing passwd entry cache Sep 4 00:06:11.118473 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Refreshing passwd entry cache Sep 4 00:06:11.106883 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 00:06:11.180815 jq[1475]: false Sep 4 00:06:11.199297 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Failure getting users, quitting Sep 4 00:06:11.201252 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 00:06:11.203850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 00:06:11.207797 oslogin_cache_refresh[1478]: Failure getting users, quitting Sep 4 00:06:11.210334 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 00:06:11.210334 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Refreshing group entry cache Sep 4 00:06:11.210334 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Failure getting groups, quitting Sep 4 00:06:11.210334 google_oslogin_nss_cache[1478]: oslogin_cache_refresh[1478]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:06:11.207874 oslogin_cache_refresh[1478]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 4 00:06:11.207972 oslogin_cache_refresh[1478]: Refreshing group entry cache Sep 4 00:06:11.209264 oslogin_cache_refresh[1478]: Failure getting groups, quitting Sep 4 00:06:11.209289 oslogin_cache_refresh[1478]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 00:06:11.243281 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 00:06:11.244048 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 00:06:11.274792 jq[1488]: true Sep 4 00:06:11.304178 jq[1510]: true Sep 4 00:06:11.301007 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 00:06:11.302658 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 00:06:11.337650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 00:06:11.339220 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 00:06:11.354149 extend-filesystems[1476]: Found /dev/sda6 Sep 4 00:06:11.359496 update_engine[1487]: I20250904 00:06:11.356575 1487 main.cc:92] Flatcar Update Engine starting Sep 4 00:06:11.362366 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 00:06:11.386795 extend-filesystems[1476]: Found /dev/sda9 Sep 4 00:06:11.399267 extend-filesystems[1476]: Checking size of /dev/sda9 Sep 4 00:06:11.425059 tar[1492]: linux-amd64/LICENSE Sep 4 00:06:11.425059 tar[1492]: linux-amd64/helm Sep 4 00:06:11.495767 extend-filesystems[1476]: Resized partition /dev/sda9 Sep 4 00:06:11.499920 coreos-metadata[1472]: Sep 04 00:06:11.498 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 4 00:06:11.499920 coreos-metadata[1472]: Sep 04 00:06:11.499 INFO Failed to fetch: error sending request for url (http://169.254.169.254/computeMetadata/v1/instance/hostname) Sep 4 00:06:11.511094 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 00:06:11.517771 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:06:11.520476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 00:06:11.531256 systemd[1]: Starting sshkeys.service... Sep 4 00:06:11.546442 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 4 00:06:11.563792 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 4 00:06:11.583768 extend-filesystems[1540]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 4 00:06:11.583768 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 4 00:06:11.583768 extend-filesystems[1540]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 4 00:06:11.592889 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Sep 4 00:06:11.586867 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 00:06:11.587426 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 00:06:11.633112 dbus-daemon[1473]: [system] SELinux support is enabled Sep 4 00:06:11.638309 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 00:06:11.640762 update_engine[1487]: I20250904 00:06:11.640246 1487 update_check_scheduler.cc:74] Next update check in 10m12s Sep 4 00:06:11.650190 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 00:06:11.657209 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 4 00:06:11.659194 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 00:06:11.659478 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 00:06:11.663019 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 00:06:11.663227 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 00:06:11.676139 systemd[1]: Started update-engine.service - Update Engine. Sep 4 00:06:11.685386 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 00:06:11.707144 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 4 00:06:11.801843 coreos-metadata[1545]: Sep 04 00:06:11.800 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 4 00:06:11.803775 coreos-metadata[1545]: Sep 04 00:06:11.803 INFO Failed to fetch: error sending request for url (http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys) Sep 4 00:06:11.813947 kernel: ACPI: button: Power Button [PWRF] Sep 4 00:06:11.844658 systemd-networkd[1440]: lo: Link UP Sep 4 00:06:11.846245 systemd-networkd[1440]: lo: Gained carrier Sep 4 00:06:11.896676 systemd-networkd[1440]: Enumeration completed Sep 4 00:06:11.905122 ntpd[1480]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting Sep 4 00:06:11.907146 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:06:11.908615 ntpd[1480]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: ---------------------------------------------------- Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: ntp-4 is maintained by Network Time Foundation, Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: corporation. Support and training for ntp-4 are Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: available at https://www.nwtime.org/support Sep 4 00:06:11.910986 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: ---------------------------------------------------- Sep 4 00:06:11.907160 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 00:06:11.908630 ntpd[1480]: ---------------------------------------------------- Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: proto: precision = 0.112 usec (-23) Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: basedate set to 2025-08-22 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: gps base set to 2025-08-24 (week 2381) Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Listen normally on 3 lo [::1]:123 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: bind(20) AF_INET6 fe80::4001:aff:fe80:51%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: unable to create socket on eth0 (4) for fe80::4001:aff:fe80:51%2#123 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:51%2 Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: Listening on routing socket on fd #20 for interface updates Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:06:11.956442 ntpd[1480]: 4 Sep 00:06:11 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:06:11.914891 systemd-networkd[1440]: eth0: Link UP Sep 4 00:06:11.908645 ntpd[1480]: ntp-4 is maintained by Network Time Foundation, Sep 4 00:06:11.917926 systemd-networkd[1440]: eth0: Gained carrier Sep 4 00:06:11.908659 ntpd[1480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 00:06:11.917966 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 00:06:11.908671 ntpd[1480]: corporation. 
Support and training for ntp-4 are Sep 4 00:06:11.933063 systemd-networkd[1440]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532' Sep 4 00:06:11.908684 ntpd[1480]: available at https://www.nwtime.org/support Sep 4 00:06:11.933110 systemd-networkd[1440]: eth0: DHCPv4 address 10.128.0.81/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 4 00:06:11.908697 ntpd[1480]: ---------------------------------------------------- Sep 4 00:06:11.916694 ntpd[1480]: proto: precision = 0.112 usec (-23) Sep 4 00:06:11.920550 ntpd[1480]: basedate set to 2025-08-22 Sep 4 00:06:11.920576 ntpd[1480]: gps base set to 2025-08-24 (week 2381) Sep 4 00:06:11.930238 ntpd[1480]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 00:06:11.930298 ntpd[1480]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 00:06:11.930513 ntpd[1480]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 00:06:11.930566 ntpd[1480]: Listen normally on 3 lo [::1]:123 Sep 4 00:06:11.930628 ntpd[1480]: bind(20) AF_INET6 fe80::4001:aff:fe80:51%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 00:06:11.930655 ntpd[1480]: unable to create socket on eth0 (4) for fe80::4001:aff:fe80:51%2#123 Sep 4 00:06:11.930675 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:51%2 Sep 4 00:06:11.930712 ntpd[1480]: Listening on routing socket on fd #20 for interface updates Sep 4 00:06:11.933356 dbus-daemon[1473]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1440 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 00:06:11.943166 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:06:11.943214 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 00:06:11.981687 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 00:06:12.002138 systemd[1]: Reached target network.target - Network. Sep 4 00:06:12.006186 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Sep 4 00:06:12.009030 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 00:06:12.020394 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 00:06:12.051910 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 00:06:12.062836 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 00:06:12.109969 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 00:06:12.115096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 00:06:12.138854 kernel: ACPI: button: Sleep Button [SLPF] Sep 4 00:06:12.187964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 00:06:12.201381 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 00:06:12.203305 systemd-logind[1486]: New seat seat0. Sep 4 00:06:12.227611 systemd[1]: Started sshd@0-10.128.0.81:22-147.75.109.163:48770.service - OpenSSH per-connection server daemon (147.75.109.163:48770). Sep 4 00:06:12.233334 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 00:06:12.250218 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 00:06:12.286721 kernel: EDAC MC: Ver: 3.0.0 Sep 4 00:06:12.318037 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 00:06:12.318584 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 00:06:12.377242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 4 00:06:12.388504 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 00:06:12.399827 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 4 00:06:12.406413 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 00:06:12.424382 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 00:06:12.475508 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 00:06:12.497106 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 00:06:12.500081 coreos-metadata[1472]: Sep 04 00:06:12.499 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #2 Sep 4 00:06:12.516276 coreos-metadata[1472]: Sep 04 00:06:12.509 INFO Fetch successful Sep 4 00:06:12.516276 coreos-metadata[1472]: Sep 04 00:06:12.510 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 4 00:06:12.516276 coreos-metadata[1472]: Sep 04 00:06:12.513 INFO Fetch successful Sep 4 00:06:12.516276 coreos-metadata[1472]: Sep 04 00:06:12.514 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 4 00:06:12.510431 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 00:06:12.518547 coreos-metadata[1472]: Sep 04 00:06:12.518 INFO Fetch successful Sep 4 00:06:12.518547 coreos-metadata[1472]: Sep 04 00:06:12.518 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 4 00:06:12.521360 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 00:06:12.524664 coreos-metadata[1472]: Sep 04 00:06:12.523 INFO Fetch successful Sep 4 00:06:12.603512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 00:06:12.622320 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 00:06:12.622941 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 00:06:12.677229 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 00:06:12.689400 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 4 00:06:12.742938 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 00:06:12.808959 coreos-metadata[1545]: Sep 04 00:06:12.808 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #2 Sep 4 00:06:12.813059 coreos-metadata[1545]: Sep 04 00:06:12.811 INFO Fetch failed with 404: resource not found Sep 4 00:06:12.813059 coreos-metadata[1545]: Sep 04 00:06:12.811 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 4 00:06:12.814120 coreos-metadata[1545]: Sep 04 00:06:12.813 INFO Fetch successful Sep 4 00:06:12.814871 coreos-metadata[1545]: Sep 04 00:06:12.814 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 4 00:06:12.817855 coreos-metadata[1545]: Sep 04 00:06:12.816 INFO Fetch failed with 404: resource not found Sep 4 00:06:12.817855 coreos-metadata[1545]: Sep 04 00:06:12.817 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 4 00:06:12.823796 coreos-metadata[1545]: Sep 04 00:06:12.820 INFO Fetch failed with 404: resource not found Sep 4 00:06:12.823796 coreos-metadata[1545]: Sep 04 00:06:12.820 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 4 00:06:12.823796 coreos-metadata[1545]: Sep 04 00:06:12.820 INFO Fetch successful Sep 4 00:06:12.833850 unknown[1545]: wrote ssh authorized keys file for user: core Sep 4 00:06:12.894896 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 00:06:12.917701 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys" Sep 4 00:06:12.922609 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 00:06:12.929370 systemd[1]: Finished sshkeys.service. Sep 4 00:06:13.028776 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 48770 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:13.034796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 00:06:13.040893 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:13.171964 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 00:06:13.175454 dbus-daemon[1473]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1554 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 00:06:13.196761 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 00:06:13.296293 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 00:06:13.313392 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 00:06:13.324568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 00:06:13.403037 systemd-networkd[1440]: eth0: Gained IPv6LL Sep 4 00:06:13.410914 systemd-logind[1486]: New session 1 of user core. 
Sep 4 00:06:13.411619 systemd-logind[1486]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 4 00:06:13.418057 containerd[1569]: time="2025-09-04T00:06:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 00:06:13.418457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 00:06:13.424715 containerd[1569]: time="2025-09-04T00:06:13.424606062Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 4 00:06:13.440641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 00:06:13.453102 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 00:06:13.469188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:13.479452 containerd[1569]: time="2025-09-04T00:06:13.478717648Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.998µs" Sep 4 00:06:13.479452 containerd[1569]: time="2025-09-04T00:06:13.479450848Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 00:06:13.479635 containerd[1569]: time="2025-09-04T00:06:13.479484910Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 00:06:13.479720 containerd[1569]: time="2025-09-04T00:06:13.479692845Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 00:06:13.481134 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 4 00:06:13.484225 containerd[1569]: time="2025-09-04T00:06:13.484162572Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 00:06:13.484321 containerd[1569]: time="2025-09-04T00:06:13.484270708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:06:13.484423 containerd[1569]: time="2025-09-04T00:06:13.484394582Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 00:06:13.484489 containerd[1569]: time="2025-09-04T00:06:13.484423200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:06:13.484935 containerd[1569]: time="2025-09-04T00:06:13.484892741Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485042 containerd[1569]: time="2025-09-04T00:06:13.484934839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485042 containerd[1569]: time="2025-09-04T00:06:13.484995326Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485042 containerd[1569]: time="2025-09-04T00:06:13.485013973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485180 containerd[1569]: time="2025-09-04T00:06:13.485144320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485565 containerd[1569]: time="2025-09-04T00:06:13.485458820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485565 containerd[1569]: time="2025-09-04T00:06:13.485522042Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 00:06:13.485565 containerd[1569]: time="2025-09-04T00:06:13.485542544Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 00:06:13.490660 containerd[1569]: time="2025-09-04T00:06:13.489906983Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 00:06:13.490660 containerd[1569]: time="2025-09-04T00:06:13.490626893Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 00:06:13.490871 containerd[1569]: time="2025-09-04T00:06:13.490783273Z" level=info msg="metadata content store policy set" policy=shared Sep 4 00:06:13.492117 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 4 00:06:13.504364 containerd[1569]: time="2025-09-04T00:06:13.501452461Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 00:06:13.504130 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.510904995Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.510981164Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511006721Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511505803Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511578528Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511607905Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511629117Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511664585Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511681011Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.511698470Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 00:06:13.517657 containerd[1569]: time="2025-09-04T00:06:13.513385178Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 00:06:13.520854 containerd[1569]: time="2025-09-04T00:06:13.518832228Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 00:06:13.520854 containerd[1569]: time="2025-09-04T00:06:13.519066537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.524880071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.524961088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.524985853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525005691Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525029345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525048784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525074987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 
00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525099697Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525122354Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525232037Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525258652Z" level=info msg="Start snapshots syncer" Sep 4 00:06:13.525829 containerd[1569]: time="2025-09-04T00:06:13.525301837Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 00:06:13.526443 containerd[1569]: time="2025-09-04T00:06:13.525645025Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 00:06:13.536912 containerd[1569]: time="2025-09-04T00:06:13.536227075Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 00:06:13.537166 containerd[1569]: time="2025-09-04T00:06:13.537096940Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538520833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538597476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538623280Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538645103Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538683094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538702328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538760580Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538843574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538867768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538887053Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 00:06:13.539008 containerd[1569]: time="2025-09-04T00:06:13.538975932Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:06:13.539675 containerd[1569]: time="2025-09-04T00:06:13.539641239Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 00:06:13.539675 containerd[1569]: time="2025-09-04T00:06:13.539705089Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.540894050Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.540928034Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.540971315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.540992883Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.541013723Z" level=info msg="runtime interface created" Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.541042806Z" level=info msg="created NRI interface" Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.541059290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.541082559Z" level=info msg="Connect containerd service" Sep 4 00:06:13.541287 containerd[1569]: time="2025-09-04T00:06:13.541156305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 00:06:13.548447 (systemd)[1636]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 00:06:13.554207 containerd[1569]: 
time="2025-09-04T00:06:13.551271193Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:06:13.554279 init.sh[1635]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 4 00:06:13.554279 init.sh[1635]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 4 00:06:13.555770 init.sh[1635]: + /usr/bin/google_instance_setup Sep 4 00:06:13.565684 systemd-logind[1486]: New session c1 of user core. Sep 4 00:06:13.654365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 00:06:13.833894 polkitd[1627]: Started polkitd version 126 Sep 4 00:06:13.856246 polkitd[1627]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 00:06:13.869142 polkitd[1627]: Loading rules from directory /run/polkit-1/rules.d Sep 4 00:06:13.869927 polkitd[1627]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 4 00:06:13.874167 polkitd[1627]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 4 00:06:13.874222 polkitd[1627]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 4 00:06:13.874301 polkitd[1627]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 00:06:13.885218 polkitd[1627]: Finished loading, compiling and executing 2 rules Sep 4 00:06:13.888197 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 00:06:13.894105 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 00:06:13.896375 tar[1492]: linux-amd64/README.md Sep 4 00:06:13.901898 polkitd[1627]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 00:06:13.959821 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 00:06:14.010448 systemd-hostnamed[1554]: Hostname set to (transient) Sep 4 00:06:14.014136 systemd-resolved[1376]: System hostname changed to 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532'. Sep 4 00:06:14.059251 systemd[1636]: Queued start job for default target default.target. Sep 4 00:06:14.066647 systemd[1636]: Created slice app.slice - User Application Slice. Sep 4 00:06:14.066703 systemd[1636]: Reached target paths.target - Paths. Sep 4 00:06:14.067982 systemd[1636]: Reached target timers.target - Timers. Sep 4 00:06:14.076641 systemd[1636]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.094921942Z" level=info msg="Start subscribing containerd event" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095104282Z" level=info msg="Start recovering state" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095439884Z" level=info msg="Start event monitor" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095469369Z" level=info msg="Start cni network conf syncer for default" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095605591Z" level=info msg="Start streaming server" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095625627Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095639081Z" level=info msg="runtime interface starting up..." 
Sep 4 00:06:14.095888 containerd[1569]: time="2025-09-04T00:06:14.095650205Z" level=info msg="starting plugins..." Sep 4 00:06:14.096359 containerd[1569]: time="2025-09-04T00:06:14.095982342Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 00:06:14.096359 containerd[1569]: time="2025-09-04T00:06:14.096140482Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 00:06:14.096359 containerd[1569]: time="2025-09-04T00:06:14.096215986Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 00:06:14.096451 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 00:06:14.098190 containerd[1569]: time="2025-09-04T00:06:14.096763565Z" level=info msg="containerd successfully booted in 0.680054s" Sep 4 00:06:14.121942 systemd[1636]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 00:06:14.122158 systemd[1636]: Reached target sockets.target - Sockets. Sep 4 00:06:14.122430 systemd[1636]: Reached target basic.target - Basic System. Sep 4 00:06:14.122569 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 00:06:14.122893 systemd[1636]: Reached target default.target - Main User Target. Sep 4 00:06:14.122951 systemd[1636]: Startup finished in 527ms. Sep 4 00:06:14.140091 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 00:06:14.396223 systemd[1]: Started sshd@1-10.128.0.81:22-147.75.109.163:48772.service - OpenSSH per-connection server daemon (147.75.109.163:48772). Sep 4 00:06:14.605486 instance-setup[1641]: INFO Running google_set_multiqueue. Sep 4 00:06:14.629727 instance-setup[1641]: INFO Set channels for eth0 to 2. Sep 4 00:06:14.635605 instance-setup[1641]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 4 00:06:14.638293 instance-setup[1641]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 4 00:06:14.639372 instance-setup[1641]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 4 00:06:14.642160 instance-setup[1641]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 4 00:06:14.642774 instance-setup[1641]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 4 00:06:14.645941 instance-setup[1641]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 4 00:06:14.647470 instance-setup[1641]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 4 00:06:14.651838 instance-setup[1641]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 4 00:06:14.663239 instance-setup[1641]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 00:06:14.669964 instance-setup[1641]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 00:06:14.673380 instance-setup[1641]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 4 00:06:14.674195 instance-setup[1641]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 4 00:06:14.707995 init.sh[1635]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 4 00:06:14.742597 sshd[1684]: Accepted publickey for core from 147.75.109.163 port 48772 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:14.747779 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:14.768381 systemd-logind[1486]: New session 2 of user core. 
Sep 4 00:06:14.772047 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 00:06:14.916166 startup-script[1714]: INFO Starting startup scripts. Sep 4 00:06:14.924274 startup-script[1714]: INFO No startup scripts found in metadata. Sep 4 00:06:14.924369 startup-script[1714]: INFO Finished running startup scripts. Sep 4 00:06:14.964067 init.sh[1635]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 4 00:06:14.964330 init.sh[1635]: + daemon_pids=() Sep 4 00:06:14.964452 init.sh[1635]: + for d in accounts clock_skew network Sep 4 00:06:14.965302 init.sh[1719]: + /usr/bin/google_accounts_daemon Sep 4 00:06:14.966602 init.sh[1635]: + daemon_pids+=($!) Sep 4 00:06:14.966602 init.sh[1635]: + for d in accounts clock_skew network Sep 4 00:06:14.966602 init.sh[1635]: + daemon_pids+=($!) Sep 4 00:06:14.966602 init.sh[1635]: + for d in accounts clock_skew network Sep 4 00:06:14.966854 init.sh[1635]: + daemon_pids+=($!) Sep 4 00:06:14.966854 init.sh[1635]: + NOTIFY_SOCKET=/run/systemd/notify Sep 4 00:06:14.966854 init.sh[1635]: + /usr/bin/systemd-notify --ready Sep 4 00:06:14.967239 init.sh[1721]: + /usr/bin/google_network_daemon Sep 4 00:06:14.968776 init.sh[1720]: + /usr/bin/google_clock_skew_daemon Sep 4 00:06:14.984779 sshd[1715]: Connection closed by 147.75.109.163 port 48772 Sep 4 00:06:14.987095 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:15.001652 systemd[1]: sshd@1-10.128.0.81:22-147.75.109.163:48772.service: Deactivated successfully. Sep 4 00:06:15.009244 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 00:06:15.016183 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Sep 4 00:06:15.019807 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 4 00:06:15.035202 init.sh[1635]: + wait -n 1719 1720 1721 Sep 4 00:06:15.054947 systemd-logind[1486]: Removed session 2. Sep 4 00:06:15.055131 systemd[1]: Started sshd@2-10.128.0.81:22-147.75.109.163:48778.service - OpenSSH per-connection server daemon (147.75.109.163:48778). Sep 4 00:06:15.442185 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 48778 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:15.447995 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:15.465855 systemd-logind[1486]: New session 3 of user core. Sep 4 00:06:15.470077 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 00:06:15.572908 google-clock-skew[1720]: INFO Starting Google Clock Skew daemon. Sep 4 00:06:15.595338 google-networking[1721]: INFO Starting Google Networking daemon. Sep 4 00:06:15.599443 google-clock-skew[1720]: INFO Clock drift token has changed: 0. Sep 4 00:06:15.623008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:15.635210 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 00:06:15.645301 systemd[1]: Startup finished in 3.839s (kernel) + 8.687s (initrd) + 9.183s (userspace) = 21.710s. 
Sep 4 00:06:15.656514 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:06:15.695871 sshd[1735]: Connection closed by 147.75.109.163 port 48778 Sep 4 00:06:15.692607 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:15.716166 groupadd[1742]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 4 00:06:15.717256 systemd[1]: sshd@2-10.128.0.81:22-147.75.109.163:48778.service: Deactivated successfully. Sep 4 00:06:15.721863 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 00:06:15.724156 groupadd[1742]: group added to /etc/gshadow: name=google-sudoers Sep 4 00:06:15.724669 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Sep 4 00:06:15.728383 systemd-logind[1486]: Removed session 3. Sep 4 00:06:15.782297 groupadd[1742]: new group: name=google-sudoers, GID=1000 Sep 4 00:06:15.818697 google-accounts[1719]: INFO Starting Google Accounts daemon. Sep 4 00:06:15.834291 google-accounts[1719]: WARNING OS Login not installed. Sep 4 00:06:15.837035 google-accounts[1719]: INFO Creating a new user account for 0. Sep 4 00:06:15.842496 init.sh[1760]: useradd: invalid user name '0': use --badname to ignore Sep 4 00:06:15.843556 google-accounts[1719]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 4 00:06:15.909374 ntpd[1480]: Listen normally on 5 eth0 10.128.0.81:123 Sep 4 00:06:15.910189 ntpd[1480]: 4 Sep 00:06:15 ntpd[1480]: Listen normally on 5 eth0 10.128.0.81:123 Sep 4 00:06:15.910189 ntpd[1480]: 4 Sep 00:06:15 ntpd[1480]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:51%2]:123 Sep 4 00:06:15.909494 ntpd[1480]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:51%2]:123 Sep 4 00:06:16.000149 systemd-resolved[1376]: Clock change detected. Flushing caches. Sep 4 00:06:16.001183 google-clock-skew[1720]: INFO Synced system time with hardware clock. Sep 4 00:06:16.334540 kubelet[1745]: E0904 00:06:16.334339 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:06:16.338384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:06:16.338674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:06:16.339442 systemd[1]: kubelet.service: Consumed 1.359s CPU time, 266.9M memory peak. Sep 4 00:06:25.566110 systemd[1]: Started sshd@3-10.128.0.81:22-147.75.109.163:41586.service - OpenSSH per-connection server daemon (147.75.109.163:41586). Sep 4 00:06:25.882804 sshd[1769]: Accepted publickey for core from 147.75.109.163 port 41586 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:25.884749 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:25.891813 systemd-logind[1486]: New session 4 of user core. Sep 4 00:06:25.904308 systemd[1]: Started session-4.scope - Session 4 of User core. 
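The kubelet error above ("open /var/lib/kubelet/config.yaml: no such file or directory") makes the unit exit, and systemd keeps restarting it (the "Scheduled restart job, restart counter is at 1/2" entries that follow). On a kubeadm-style node that file is only written during init/join, so repeated failures at this stage of boot are expected rather than a fault. A tiny sketch of the condition (added for context, not part of the captured log; the path comes from the error message, the kubeadm remark is an assumption about how this node is later provisioned):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

    def kubelet_configured() -> bool:
        """True once something (e.g. kubeadm init/join) has written the kubelet config."""
        return KUBELET_CONFIG.is_file()

    if not kubelet_configured():
        print(f"{KUBELET_CONFIG}: missing - kubelet exits and systemd retries until it appears")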
Sep 4 00:06:26.099221 sshd[1771]: Connection closed by 147.75.109.163 port 41586 Sep 4 00:06:26.100197 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:26.106661 systemd[1]: sshd@3-10.128.0.81:22-147.75.109.163:41586.service: Deactivated successfully. Sep 4 00:06:26.109385 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 00:06:26.111154 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Sep 4 00:06:26.113267 systemd-logind[1486]: Removed session 4. Sep 4 00:06:26.166263 systemd[1]: Started sshd@4-10.128.0.81:22-147.75.109.163:41592.service - OpenSSH per-connection server daemon (147.75.109.163:41592). Sep 4 00:06:26.420742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 00:06:26.425299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:26.479619 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 41592 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:26.482645 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:26.490966 systemd-logind[1486]: New session 5 of user core. Sep 4 00:06:26.501334 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 00:06:26.694287 sshd[1782]: Connection closed by 147.75.109.163 port 41592 Sep 4 00:06:26.695618 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:26.701314 systemd[1]: sshd@4-10.128.0.81:22-147.75.109.163:41592.service: Deactivated successfully. Sep 4 00:06:26.703977 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 00:06:26.705488 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Sep 4 00:06:26.707786 systemd-logind[1486]: Removed session 5. Sep 4 00:06:26.760580 systemd[1]: Started sshd@5-10.128.0.81:22-147.75.109.163:41606.service - OpenSSH per-connection server daemon (147.75.109.163:41606). Sep 4 00:06:26.780530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:26.796267 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:06:26.859233 kubelet[1794]: E0904 00:06:26.859112 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:06:26.864696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:06:26.864972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:06:26.865855 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.1M memory peak. Sep 4 00:06:27.078594 sshd[1792]: Accepted publickey for core from 147.75.109.163 port 41606 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:27.080280 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:27.089086 systemd-logind[1486]: New session 6 of user core. Sep 4 00:06:27.096371 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 00:06:27.294556 sshd[1802]: Connection closed by 147.75.109.163 port 41606 Sep 4 00:06:27.295626 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:27.302516 systemd[1]: sshd@5-10.128.0.81:22-147.75.109.163:41606.service: Deactivated successfully. Sep 4 00:06:27.305402 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 00:06:27.306802 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Sep 4 00:06:27.309254 systemd-logind[1486]: Removed session 6. Sep 4 00:06:27.354892 systemd[1]: Started sshd@6-10.128.0.81:22-147.75.109.163:41612.service - OpenSSH per-connection server daemon (147.75.109.163:41612). Sep 4 00:06:27.674193 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 41612 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:27.676306 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:27.684570 systemd-logind[1486]: New session 7 of user core. Sep 4 00:06:27.692281 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 00:06:27.872206 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 00:06:27.872727 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:06:27.886667 sudo[1811]: pam_unix(sudo:session): session closed for user root Sep 4 00:06:27.930651 sshd[1810]: Connection closed by 147.75.109.163 port 41612 Sep 4 00:06:27.932090 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:27.940135 systemd[1]: sshd@6-10.128.0.81:22-147.75.109.163:41612.service: Deactivated successfully. Sep 4 00:06:27.943655 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 00:06:27.945123 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Sep 4 00:06:27.947727 systemd-logind[1486]: Removed session 7. Sep 4 00:06:27.998616 systemd[1]: Started sshd@7-10.128.0.81:22-147.75.109.163:41614.service - OpenSSH per-connection server daemon (147.75.109.163:41614). Sep 4 00:06:28.317407 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 41614 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:28.319681 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:28.327723 systemd-logind[1486]: New session 8 of user core. Sep 4 00:06:28.339424 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 00:06:28.500119 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 00:06:28.500652 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:06:28.509226 sudo[1821]: pam_unix(sudo:session): session closed for user root Sep 4 00:06:28.525134 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 00:06:28.525640 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:06:28.540476 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 00:06:28.608662 augenrules[1843]: No rules Sep 4 00:06:28.611211 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 00:06:28.611605 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 4 00:06:28.613401 sudo[1820]: pam_unix(sudo:session): session closed for user root Sep 4 00:06:28.657202 sshd[1819]: Connection closed by 147.75.109.163 port 41614 Sep 4 00:06:28.657907 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Sep 4 00:06:28.664357 systemd[1]: sshd@7-10.128.0.81:22-147.75.109.163:41614.service: Deactivated successfully. Sep 4 00:06:28.667261 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 00:06:28.668698 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Sep 4 00:06:28.672376 systemd-logind[1486]: Removed session 8. Sep 4 00:06:28.716989 systemd[1]: Started sshd@8-10.128.0.81:22-147.75.109.163:41628.service - OpenSSH per-connection server daemon (147.75.109.163:41628). Sep 4 00:06:29.027497 sshd[1852]: Accepted publickey for core from 147.75.109.163 port 41628 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:06:29.029772 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:06:29.038855 systemd-logind[1486]: New session 9 of user core. Sep 4 00:06:29.045320 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 00:06:29.211575 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 00:06:29.212134 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 00:06:29.739433 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 00:06:29.755937 (dockerd)[1873]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 00:06:30.093487 dockerd[1873]: time="2025-09-04T00:06:30.093089928Z" level=info msg="Starting up" Sep 4 00:06:30.096035 dockerd[1873]: time="2025-09-04T00:06:30.095875267Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 00:06:30.183789 dockerd[1873]: time="2025-09-04T00:06:30.183335716Z" level=info msg="Loading containers: start." Sep 4 00:06:30.204036 kernel: Initializing XFRM netlink socket Sep 4 00:06:30.553459 systemd-networkd[1440]: docker0: Link UP Sep 4 00:06:30.560145 dockerd[1873]: time="2025-09-04T00:06:30.560084132Z" level=info msg="Loading containers: done." Sep 4 00:06:30.579687 dockerd[1873]: time="2025-09-04T00:06:30.579621241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 00:06:30.579923 dockerd[1873]: time="2025-09-04T00:06:30.579827970Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 4 00:06:30.580089 dockerd[1873]: time="2025-09-04T00:06:30.580045722Z" level=info msg="Initializing buildkit" Sep 4 00:06:30.616366 dockerd[1873]: time="2025-09-04T00:06:30.616297301Z" level=info msg="Completed buildkit initialization" Sep 4 00:06:30.621368 dockerd[1873]: time="2025-09-04T00:06:30.621291102Z" level=info msg="Daemon has completed initialization" Sep 4 00:06:30.623356 dockerd[1873]: time="2025-09-04T00:06:30.621570295Z" level=info msg="API listen on /run/docker.sock" Sep 4 00:06:30.621619 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 4 00:06:31.515795 containerd[1569]: time="2025-09-04T00:06:31.515740860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 00:06:32.125917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931860566.mount: Deactivated successfully. Sep 4 00:06:33.893462 containerd[1569]: time="2025-09-04T00:06:33.893320909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:33.895147 containerd[1569]: time="2025-09-04T00:06:33.894802530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28807315" Sep 4 00:06:33.896308 containerd[1569]: time="2025-09-04T00:06:33.896261384Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:33.899764 containerd[1569]: time="2025-09-04T00:06:33.899720736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:33.901066 containerd[1569]: time="2025-09-04T00:06:33.901020027Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.385201648s" Sep 4 00:06:33.901176 containerd[1569]: time="2025-09-04T00:06:33.901078941Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 00:06:33.902250 containerd[1569]: time="2025-09-04T00:06:33.902193980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 00:06:35.450125 containerd[1569]: time="2025-09-04T00:06:35.450040114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:35.451868 containerd[1569]: time="2025-09-04T00:06:35.451722704Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24786062" Sep 4 00:06:35.453458 containerd[1569]: time="2025-09-04T00:06:35.453408157Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:35.457837 containerd[1569]: time="2025-09-04T00:06:35.457284741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:35.458621 containerd[1569]: time="2025-09-04T00:06:35.458573159Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.556332069s" Sep 4 00:06:35.458747 
containerd[1569]: time="2025-09-04T00:06:35.458630145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 00:06:35.459395 containerd[1569]: time="2025-09-04T00:06:35.459359793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 00:06:36.712051 containerd[1569]: time="2025-09-04T00:06:36.711969148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.713820 containerd[1569]: time="2025-09-04T00:06:36.713555744Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19176952" Sep 4 00:06:36.714967 containerd[1569]: time="2025-09-04T00:06:36.714923424Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.718981 containerd[1569]: time="2025-09-04T00:06:36.718935376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:36.720554 containerd[1569]: time="2025-09-04T00:06:36.720347252Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.26093776s" Sep 4 00:06:36.720554 containerd[1569]: time="2025-09-04T00:06:36.720402095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 00:06:36.721911 containerd[1569]: time="2025-09-04T00:06:36.721866156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 00:06:37.115588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 00:06:37.118667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:37.649253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:37.660676 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 00:06:37.756031 kubelet[2148]: E0904 00:06:37.755297 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 00:06:37.759742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 00:06:37.761114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 00:06:37.762079 systemd[1]: kubelet.service: Consumed 254ms CPU time, 108.8M memory peak. Sep 4 00:06:38.237035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241112370.mount: Deactivated successfully. 
Sep 4 00:06:38.964446 containerd[1569]: time="2025-09-04T00:06:38.964361417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:38.966070 containerd[1569]: time="2025-09-04T00:06:38.965756341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30899065" Sep 4 00:06:38.967315 containerd[1569]: time="2025-09-04T00:06:38.967259446Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:38.969895 containerd[1569]: time="2025-09-04T00:06:38.969852311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:38.971096 containerd[1569]: time="2025-09-04T00:06:38.970866365Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.248956974s" Sep 4 00:06:38.971096 containerd[1569]: time="2025-09-04T00:06:38.970912123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 00:06:38.971507 containerd[1569]: time="2025-09-04T00:06:38.971465822Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 00:06:39.517263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712447306.mount: Deactivated successfully. 
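The var-lib-containerd-tmpmounts-… mount units that keep being deactivated between pulls are systemd mount units whose names are the escaped form of temporary paths under /var/lib/containerd/tmpmounts/. A rough sketch of that escaping follows; systemd-escape --path is the authoritative tool, and this approximation only covers the characters that actually occur in these unit names:

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping: strip slashes at the ends,
    turn '/' into '-', and hex-escape other non-alphanumeric characters."""
    out = []
    for ch in path.strip("/"):
        if ch.isalnum() or ch == "_":
            out.append(ch)
        elif ch == "/":
            out.append("-")
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount2712447306") + ".mount")
# var-lib-containerd-tmpmounts-containerd\x2dmount2712447306.mount
```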
Sep 4 00:06:40.806080 containerd[1569]: time="2025-09-04T00:06:40.805992548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:40.807850 containerd[1569]: time="2025-09-04T00:06:40.807803324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 4 00:06:40.809929 containerd[1569]: time="2025-09-04T00:06:40.809103961Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:40.814758 containerd[1569]: time="2025-09-04T00:06:40.814696559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:40.816119 containerd[1569]: time="2025-09-04T00:06:40.816049453Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.844538232s" Sep 4 00:06:40.816119 containerd[1569]: time="2025-09-04T00:06:40.816103448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 00:06:40.817051 containerd[1569]: time="2025-09-04T00:06:40.816741587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 00:06:41.184812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322616133.mount: Deactivated successfully. 
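Each "Pulled image" entry reports three handles for the same image: the repo tag (the mutable, human-facing name), the repo digest (the immutable, content-addressed manifest reference), and the image id (the local ID, which for OCI images is usually the digest of the image config blob). A small sketch of that triple using the coredns values from the entry above, just to make the distinction explicit; the dataclass is illustrative, not part of any containerd API:

```python
from dataclasses import dataclass

@dataclass
class PulledImage:
    repo_tag: str     # mutable reference, e.g. what a manifest lists
    repo_digest: str  # content-addressed manifest reference, survives retagging
    image_id: str     # local image ID (typically the config blob digest)

coredns = PulledImage(
    repo_tag="registry.k8s.io/coredns/coredns:v1.11.3",
    repo_digest=("registry.k8s.io/coredns/coredns@sha256:"
                 "9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"),
    image_id="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
)
print(coredns.repo_tag, "->", coredns.image_id)
```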
Sep 4 00:06:41.194133 containerd[1569]: time="2025-09-04T00:06:41.194056663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:06:41.195565 containerd[1569]: time="2025-09-04T00:06:41.195512889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 4 00:06:41.197071 containerd[1569]: time="2025-09-04T00:06:41.197025333Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:06:41.202035 containerd[1569]: time="2025-09-04T00:06:41.201956260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 00:06:41.202991 containerd[1569]: time="2025-09-04T00:06:41.202860641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 386.073085ms" Sep 4 00:06:41.202991 containerd[1569]: time="2025-09-04T00:06:41.202908594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 00:06:41.203799 containerd[1569]: time="2025-09-04T00:06:41.203734369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 00:06:41.617595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093565510.mount: Deactivated successfully. Sep 4 00:06:43.855556 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
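Adding up the per-image durations containerd has reported so far (etcd is still in flight at this point) shows the pulls run back to back rather than in parallel: about 9.7 s of cumulative pull time against roughly the same wall clock between the first PullImage at 00:06:31.5 and the pause pull returning at 00:06:41.2. A quick check, with the durations copied from the log:

```python
# Per-image pull durations as logged by containerd, in seconds.
pulls = {
    "kube-apiserver:v1.32.8":          2.385201648,
    "kube-controller-manager:v1.32.8": 1.556332069,
    "kube-scheduler:v1.32.8":          1.260937760,
    "kube-proxy:v1.32.8":              2.248956974,
    "coredns/coredns:v1.11.3":         1.844538232,
    "pause:3.10":                      0.386073085,
}
print(f"total pull time: {sum(pulls.values()):.2f}s")  # total pull time: 9.68s
```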
Sep 4 00:06:44.080163 containerd[1569]: time="2025-09-04T00:06:44.080062449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.081620 containerd[1569]: time="2025-09-04T00:06:44.081576696Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565" Sep 4 00:06:44.083679 containerd[1569]: time="2025-09-04T00:06:44.083609134Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.087762 containerd[1569]: time="2025-09-04T00:06:44.087685661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:06:44.090138 containerd[1569]: time="2025-09-04T00:06:44.089547545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.885771476s" Sep 4 00:06:44.090138 containerd[1569]: time="2025-09-04T00:06:44.089889014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 00:06:47.673641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:47.674099 systemd[1]: kubelet.service: Consumed 254ms CPU time, 108.8M memory peak. Sep 4 00:06:47.678292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:47.741861 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-9.scope)... Sep 4 00:06:47.741907 systemd[1]: Reloading... Sep 4 00:06:47.963050 zram_generator::config[2350]: No configuration found. Sep 4 00:06:48.088127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:48.305029 systemd[1]: Reloading finished in 562 ms. Sep 4 00:06:48.381636 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 00:06:48.381825 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 00:06:48.382378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:48.382457 systemd[1]: kubelet.service: Consumed 197ms CPU time, 98.3M memory peak. Sep 4 00:06:48.384993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:48.736769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:48.748699 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:06:48.812060 kubelet[2395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:06:48.812060 kubelet[2395]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 4 00:06:48.812060 kubelet[2395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:06:48.812060 kubelet[2395]: I0904 00:06:48.811943 2395 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:06:50.042863 kubelet[2395]: I0904 00:06:50.042801 2395 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:06:50.042863 kubelet[2395]: I0904 00:06:50.042837 2395 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:06:50.043508 kubelet[2395]: I0904 00:06:50.043319 2395 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:06:50.085900 kubelet[2395]: E0904 00:06:50.085823 2395 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:50.086588 kubelet[2395]: I0904 00:06:50.086403 2395 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:06:50.104623 kubelet[2395]: I0904 00:06:50.104590 2395 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:06:50.109747 kubelet[2395]: I0904 00:06:50.109704 2395 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 00:06:50.112374 kubelet[2395]: I0904 00:06:50.112308 2395 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:06:50.112629 kubelet[2395]: I0904 00:06:50.112356 2395 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:06:50.112839 kubelet[2395]: I0904 00:06:50.112650 2395 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:06:50.112839 kubelet[2395]: I0904 00:06:50.112678 2395 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:06:50.112950 kubelet[2395]: I0904 00:06:50.112909 2395 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:50.121022 kubelet[2395]: I0904 00:06:50.120937 2395 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:06:50.121022 kubelet[2395]: I0904 00:06:50.121019 2395 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:06:50.122239 kubelet[2395]: I0904 00:06:50.121063 2395 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:06:50.122239 kubelet[2395]: I0904 00:06:50.121081 2395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:06:50.130798 kubelet[2395]: W0904 00:06:50.130128 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:50.130798 kubelet[2395]: E0904 00:06:50.130232 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:50.130798 
kubelet[2395]: W0904 00:06:50.130708 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:50.130798 kubelet[2395]: E0904 00:06:50.130749 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:50.131548 kubelet[2395]: I0904 00:06:50.131514 2395 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:06:50.132223 kubelet[2395]: I0904 00:06:50.132190 2395 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:06:50.135093 kubelet[2395]: W0904 00:06:50.135046 2395 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 00:06:50.138315 kubelet[2395]: I0904 00:06:50.138242 2395 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:06:50.138530 kubelet[2395]: I0904 00:06:50.138333 2395 server.go:1287] "Started kubelet" Sep 4 00:06:50.139160 kubelet[2395]: I0904 00:06:50.138643 2395 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:06:50.140147 kubelet[2395]: I0904 00:06:50.140073 2395 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:06:50.145073 kubelet[2395]: I0904 00:06:50.143639 2395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:06:50.145073 kubelet[2395]: I0904 00:06:50.143940 2395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:06:50.145073 kubelet[2395]: I0904 00:06:50.144297 2395 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:06:50.148280 kubelet[2395]: E0904 00:06:50.145771 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532.1861eba1e066950f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,UID:ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,},FirstTimestamp:2025-09-04 00:06:50.138285327 +0000 UTC m=+1.382921433,LastTimestamp:2025-09-04 00:06:50.138285327 +0000 UTC m=+1.382921433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,}" Sep 4 00:06:50.153936 kubelet[2395]: I0904 00:06:50.153886 2395 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:06:50.154674 kubelet[2395]: I0904 00:06:50.154644 2395 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:06:50.157448 kubelet[2395]: I0904 00:06:50.157416 2395 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:06:50.157648 kubelet[2395]: E0904 00:06:50.154150 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" Sep 4 00:06:50.157857 kubelet[2395]: I0904 00:06:50.157839 2395 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:06:50.159259 kubelet[2395]: W0904 00:06:50.159199 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:50.159667 kubelet[2395]: E0904 00:06:50.159636 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:50.160161 kubelet[2395]: E0904 00:06:50.160103 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="200ms" Sep 4 00:06:50.160810 kubelet[2395]: E0904 00:06:50.160771 2395 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:06:50.161331 kubelet[2395]: I0904 00:06:50.161304 2395 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:06:50.161590 kubelet[2395]: I0904 00:06:50.161563 2395 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:06:50.163450 kubelet[2395]: I0904 00:06:50.163427 2395 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:06:50.185085 kubelet[2395]: I0904 00:06:50.184985 2395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:06:50.188031 kubelet[2395]: I0904 00:06:50.187433 2395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:06:50.188031 kubelet[2395]: I0904 00:06:50.187472 2395 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:06:50.188031 kubelet[2395]: I0904 00:06:50.187501 2395 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
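The nodeConfig dump a few entries up carries the kubelet's hard eviction thresholds in a fairly dense JSON form; rendered out, they are the defaults: nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, and memory.available < 100Mi. A small sketch that turns that structure into the readable form, with the field names exactly as they appear in the log:

```python
# HardEvictionThresholds as printed in the kubelet's nodeConfig above.
thresholds = [
    {"Signal": "nodefs.available",   "Value": {"Quantity": None,    "Percentage": 0.10}},
    {"Signal": "nodefs.inodesFree",  "Value": {"Quantity": None,    "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Value": {"Quantity": None,    "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree", "Value": {"Quantity": None,    "Percentage": 0.05}},
    {"Signal": "memory.available",   "Value": {"Quantity": "100Mi", "Percentage": 0}},
]

for t in thresholds:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} < {limit}")
# evict when nodefs.available < 10%
# ...
# evict when memory.available < 100Mi
```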
Sep 4 00:06:50.188031 kubelet[2395]: I0904 00:06:50.187511 2395 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:06:50.188031 kubelet[2395]: E0904 00:06:50.187584 2395 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:06:50.199872 kubelet[2395]: W0904 00:06:50.199791 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:50.200165 kubelet[2395]: E0904 00:06:50.200132 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:50.201633 kubelet[2395]: I0904 00:06:50.201609 2395 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:06:50.201815 kubelet[2395]: I0904 00:06:50.201771 2395 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:06:50.201815 kubelet[2395]: I0904 00:06:50.201813 2395 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:50.204505 kubelet[2395]: I0904 00:06:50.204465 2395 policy_none.go:49] "None policy: Start" Sep 4 00:06:50.204505 kubelet[2395]: I0904 00:06:50.204496 2395 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:06:50.204699 kubelet[2395]: I0904 00:06:50.204516 2395 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:06:50.214097 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 00:06:50.234174 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 00:06:50.240993 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 00:06:50.252568 kubelet[2395]: I0904 00:06:50.252532 2395 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:06:50.253555 kubelet[2395]: I0904 00:06:50.253479 2395 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:06:50.253555 kubelet[2395]: I0904 00:06:50.253509 2395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:06:50.254123 kubelet[2395]: I0904 00:06:50.254064 2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:06:50.257144 kubelet[2395]: E0904 00:06:50.257116 2395 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 00:06:50.257264 kubelet[2395]: E0904 00:06:50.257190 2395 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" Sep 4 00:06:50.316221 systemd[1]: Created slice kubepods-burstable-pod625dea1bcd784d9f8e4eacdef9d87803.slice - libcontainer container kubepods-burstable-pod625dea1bcd784d9f8e4eacdef9d87803.slice. 
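The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units systemd just created are the parent cgroups for the pod QoS classes; each pod then gets its own child slice, which is why the last entry above creates kubepods-burstable-pod625dea….slice for what the volume entries below identify as the static kube-apiserver pod. A sketch of how those names compose under the systemd cgroup driver (dash-separated slice names nest, so a-b-c.slice lives inside a-b.slice inside a.slice); the helper is ours, not a kubelet API:

```python
def pod_slice(qos_class: str, pod_uid: str) -> str:
    """Per-pod slice name used with the systemd cgroup driver; pod UIDs that
    contain dashes are transformed further, which this sketch skips."""
    return f"kubepods-{qos_class}-pod{pod_uid}.slice"

uid = "625dea1bcd784d9f8e4eacdef9d87803"  # static kube-apiserver pod UID from the log
print(pod_slice("burstable", uid))
# kubepods-burstable-pod625dea1bcd784d9f8e4eacdef9d87803.slice
# nests as kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice
```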
Sep 4 00:06:50.330447 kubelet[2395]: E0904 00:06:50.330378 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.334677 systemd[1]: Created slice kubepods-burstable-pod709e384e83e1368321257eed579fd138.slice - libcontainer container kubepods-burstable-pod709e384e83e1368321257eed579fd138.slice. Sep 4 00:06:50.347569 kubelet[2395]: E0904 00:06:50.347514 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.352385 systemd[1]: Created slice kubepods-burstable-pod7a8138086488ed5a9133d9bd032941b9.slice - libcontainer container kubepods-burstable-pod7a8138086488ed5a9133d9bd032941b9.slice. Sep 4 00:06:50.356072 kubelet[2395]: E0904 00:06:50.355988 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.358827 kubelet[2395]: I0904 00:06:50.358314 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.358827 kubelet[2395]: I0904 00:06:50.358381 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.358827 kubelet[2395]: I0904 00:06:50.358419 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.358827 kubelet[2395]: I0904 00:06:50.358448 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359210 kubelet[2395]: I0904 00:06:50.358481 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a8138086488ed5a9133d9bd032941b9-kubeconfig\") pod 
\"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"7a8138086488ed5a9133d9bd032941b9\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359210 kubelet[2395]: I0904 00:06:50.358521 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359210 kubelet[2395]: I0904 00:06:50.358600 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359210 kubelet[2395]: I0904 00:06:50.358653 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359410 kubelet[2395]: I0904 00:06:50.358693 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.359828 kubelet[2395]: I0904 00:06:50.359795 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.360491 kubelet[2395]: E0904 00:06:50.360441 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.360937 kubelet[2395]: E0904 00:06:50.360889 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="400ms" Sep 4 00:06:50.565926 kubelet[2395]: I0904 00:06:50.565873 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.566829 kubelet[2395]: E0904 00:06:50.566653 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.632970 
containerd[1569]: time="2025-09-04T00:06:50.632901180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:625dea1bcd784d9f8e4eacdef9d87803,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:50.650052 containerd[1569]: time="2025-09-04T00:06:50.649954986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:709e384e83e1368321257eed579fd138,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:50.676511 containerd[1569]: time="2025-09-04T00:06:50.676090398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:7a8138086488ed5a9133d9bd032941b9,Namespace:kube-system,Attempt:0,}" Sep 4 00:06:50.680870 containerd[1569]: time="2025-09-04T00:06:50.680757310Z" level=info msg="connecting to shim aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88" address="unix:///run/containerd/s/ac6b1023877d80c8e02b13d03cd7677fb3c2150c6a7509a76def56556a587e83" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:50.728058 containerd[1569]: time="2025-09-04T00:06:50.725349577Z" level=info msg="connecting to shim 6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2" address="unix:///run/containerd/s/2455fcfea3b0a1ac1dd311d3560914a4c35e82b901b559ff44c9b15f7eb5e4a6" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:50.754219 containerd[1569]: time="2025-09-04T00:06:50.754125394Z" level=info msg="connecting to shim 5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80" address="unix:///run/containerd/s/1cadb53bb69999bb0c37deb30e9cdfd338f462026c3c456d30e490eeca2d9aa7" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:06:50.766051 kubelet[2395]: E0904 00:06:50.763770 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532?timeout=10s\": dial tcp 10.128.0.81:6443: connect: connection refused" interval="800ms" Sep 4 00:06:50.787706 systemd[1]: Started cri-containerd-aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88.scope - libcontainer container aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88. Sep 4 00:06:50.826312 systemd[1]: Started cri-containerd-6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2.scope - libcontainer container 6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2. Sep 4 00:06:50.851342 systemd[1]: Started cri-containerd-5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80.scope - libcontainer container 5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80. 
Sep 4 00:06:50.955059 containerd[1569]: time="2025-09-04T00:06:50.953413388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:709e384e83e1368321257eed579fd138,Namespace:kube-system,Attempt:0,} returns sandbox id \"6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2\"" Sep 4 00:06:50.958808 kubelet[2395]: E0904 00:06:50.958741 2395 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5" Sep 4 00:06:50.963060 containerd[1569]: time="2025-09-04T00:06:50.961530239Z" level=info msg="CreateContainer within sandbox \"6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 00:06:50.978773 containerd[1569]: time="2025-09-04T00:06:50.978172906Z" level=info msg="Container acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:50.984019 kubelet[2395]: I0904 00:06:50.983958 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.985783 kubelet[2395]: E0904 00:06:50.985448 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.81:6443/api/v1/nodes\": dial tcp 10.128.0.81:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:50.990587 containerd[1569]: time="2025-09-04T00:06:50.990499592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:625dea1bcd784d9f8e4eacdef9d87803,Namespace:kube-system,Attempt:0,} returns sandbox id \"aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88\"" Sep 4 00:06:50.997295 kubelet[2395]: E0904 00:06:50.997208 2395 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32e" Sep 4 00:06:51.001042 containerd[1569]: time="2025-09-04T00:06:51.000969214Z" level=info msg="CreateContainer within sandbox \"6375b4220d7e25fa30aa74bccf66824ba5289e9ff8b9e0632dbb26b153eedcb2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44\"" Sep 4 00:06:51.002953 containerd[1569]: time="2025-09-04T00:06:51.002912267Z" level=info msg="StartContainer for \"acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44\"" Sep 4 00:06:51.003711 containerd[1569]: time="2025-09-04T00:06:51.003552246Z" level=info msg="CreateContainer within sandbox \"aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 00:06:51.008115 containerd[1569]: time="2025-09-04T00:06:51.007998692Z" level=info msg="connecting to shim acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44" address="unix:///run/containerd/s/2455fcfea3b0a1ac1dd311d3560914a4c35e82b901b559ff44c9b15f7eb5e4a6" protocol=ttrpc version=3 Sep 4 00:06:51.020854 containerd[1569]: time="2025-09-04T00:06:51.019558621Z" level=info msg="Container 
ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:51.024113 containerd[1569]: time="2025-09-04T00:06:51.023049312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532,Uid:7a8138086488ed5a9133d9bd032941b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80\"" Sep 4 00:06:51.026784 kubelet[2395]: E0904 00:06:51.026735 2395 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32e" Sep 4 00:06:51.030031 containerd[1569]: time="2025-09-04T00:06:51.029966419Z" level=info msg="CreateContainer within sandbox \"5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 00:06:51.033186 containerd[1569]: time="2025-09-04T00:06:51.033144987Z" level=info msg="CreateContainer within sandbox \"aca662a237289abb35e98c445fb41c0db6765fd98ff33e3767a2c3e8fb180e88\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab\"" Sep 4 00:06:51.041192 containerd[1569]: time="2025-09-04T00:06:51.041138876Z" level=info msg="Container 822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:06:51.050244 containerd[1569]: time="2025-09-04T00:06:51.050171353Z" level=info msg="StartContainer for \"ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab\"" Sep 4 00:06:51.052984 containerd[1569]: time="2025-09-04T00:06:51.052933722Z" level=info msg="connecting to shim ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab" address="unix:///run/containerd/s/ac6b1023877d80c8e02b13d03cd7677fb3c2150c6a7509a76def56556a587e83" protocol=ttrpc version=3 Sep 4 00:06:51.059270 systemd[1]: Started cri-containerd-acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44.scope - libcontainer container acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44. Sep 4 00:06:51.082606 containerd[1569]: time="2025-09-04T00:06:51.079721249Z" level=info msg="CreateContainer within sandbox \"5b6f0371ca1318fe993ccd7a5975d22133cbc985fc5f5c53ac8d7c50b2236a80\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d\"" Sep 4 00:06:51.091413 containerd[1569]: time="2025-09-04T00:06:51.091331089Z" level=info msg="StartContainer for \"822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d\"" Sep 4 00:06:51.101380 containerd[1569]: time="2025-09-04T00:06:51.100670643Z" level=info msg="connecting to shim 822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d" address="unix:///run/containerd/s/1cadb53bb69999bb0c37deb30e9cdfd338f462026c3c456d30e490eeca2d9aa7" protocol=ttrpc version=3 Sep 4 00:06:51.112457 systemd[1]: Started cri-containerd-ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab.scope - libcontainer container ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab. 
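The "Hostname for pod was too long, truncated it" warnings come from the 63-character DNS label limit on pod hostnames; the node name here is long enough that every static pod name overflows it. Truncating at 63 characters reproduces the truncatedHostname values in the log exactly (the kubelet may also trim trailing separators, which does not come into play with these names):

```python
HOSTNAME_MAX_LEN = 63  # DNS label limit, as reported by the kubelet above

node = "ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532"
for component in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    pod_name = f"{component}-{node}"
    print(pod_name[:HOSTNAME_MAX_LEN])
# kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32e
# kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5
# kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32e
```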
Sep 4 00:06:51.155340 systemd[1]: Started cri-containerd-822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d.scope - libcontainer container 822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d. Sep 4 00:06:51.198460 kubelet[2395]: W0904 00:06:51.198261 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532&limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:51.200760 kubelet[2395]: E0904 00:06:51.199995 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532&limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:51.258251 containerd[1569]: time="2025-09-04T00:06:51.258172402Z" level=info msg="StartContainer for \"acd0c0badbf6ea0ac0e62c53fc95855bafa18e33ba8d8cf10600ec07a712ed44\" returns successfully" Sep 4 00:06:51.327634 containerd[1569]: time="2025-09-04T00:06:51.327473833Z" level=info msg="StartContainer for \"822ece27052f8aeee5df0ff5db6b2e55062034d137e03ee96b136eae81e7b17d\" returns successfully" Sep 4 00:06:51.337575 containerd[1569]: time="2025-09-04T00:06:51.337432115Z" level=info msg="StartContainer for \"ebb8304843ee7d1408ccff359e2073026dbd83b417927b85bf8484397c2471ab\" returns successfully" Sep 4 00:06:51.388352 kubelet[2395]: W0904 00:06:51.388180 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:51.388352 kubelet[2395]: E0904 00:06:51.388312 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:51.487517 kubelet[2395]: W0904 00:06:51.487386 2395 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.81:6443: connect: connection refused Sep 4 00:06:51.487517 kubelet[2395]: E0904 00:06:51.487477 2395 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.81:6443: connect: connection refused" logger="UnhandledError" Sep 4 00:06:51.795454 kubelet[2395]: I0904 00:06:51.794446 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:52.253380 kubelet[2395]: E0904 00:06:52.253241 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:52.259742 kubelet[2395]: E0904 
00:06:52.259700 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:52.264636 kubelet[2395]: E0904 00:06:52.264596 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:53.266897 kubelet[2395]: E0904 00:06:53.266848 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:53.270037 kubelet[2395]: E0904 00:06:53.269845 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:53.272786 kubelet[2395]: E0904 00:06:53.272739 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.260993 kubelet[2395]: E0904 00:06:54.260939 2395 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.269041 kubelet[2395]: E0904 00:06:54.268462 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.269041 kubelet[2395]: E0904 00:06:54.268945 2395 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.314400 kubelet[2395]: I0904 00:06:54.314065 2395 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.355209 kubelet[2395]: I0904 00:06:54.355160 2395 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.406701 kubelet[2395]: E0904 00:06:54.406316 2395 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.406701 kubelet[2395]: I0904 00:06:54.406373 2395 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.415032 kubelet[2395]: E0904 00:06:54.414854 2395 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.415032 kubelet[2395]: I0904 00:06:54.414909 2395 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:54.423035 kubelet[2395]: E0904 00:06:54.420920 2395 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:55.128394 kubelet[2395]: I0904 00:06:55.128306 2395 apiserver.go:52] "Watching apiserver" Sep 4 00:06:55.158768 kubelet[2395]: I0904 00:06:55.158702 2395 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:06:56.236896 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-9.scope)... Sep 4 00:06:56.236922 systemd[1]: Reloading... Sep 4 00:06:56.422055 zram_generator::config[2706]: No configuration found. Sep 4 00:06:56.563050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 00:06:56.748593 update_engine[1487]: I20250904 00:06:56.748082 1487 update_attempter.cc:509] Updating boot flags... Sep 4 00:06:56.779588 systemd[1]: Reloading finished in 541 ms. Sep 4 00:06:56.831892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:56.867778 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 00:06:56.869590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:56.869970 systemd[1]: kubelet.service: Consumed 1.953s CPU time, 131.3M memory peak. Sep 4 00:06:56.879936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 00:06:57.370750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 00:06:57.389330 (kubelet)[2774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 00:06:57.480839 kubelet[2774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 00:06:57.482593 kubelet[2774]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 00:06:57.482593 kubelet[2774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
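Each kubelet start logs the same deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir should move into the KubeletConfiguration file instead of staying on the command line. A hedged sketch of what that configuration might look like, emitted as JSON (which the kubelet accepts as a config file): the field names follow the kubelet.config.k8s.io/v1beta1 API as we recall it, the socket path is the conventional containerd default rather than anything taken from this host, and the volume plugin dir is the path the kubelet reported above; verify all of it against the v1.32 kubelet documentation before relying on it.

```python
import json

# Assumed v1beta1 field names for the two deprecated flags seen in this log;
# check them against the KubeletConfiguration reference for the running version.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # assumed default
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",  # from the log
}
print(json.dumps(kubelet_config, indent=2))
```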
Sep 4 00:06:57.482593 kubelet[2774]: I0904 00:06:57.481421 2774 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 00:06:57.497373 kubelet[2774]: I0904 00:06:57.497318 2774 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 00:06:57.497858 kubelet[2774]: I0904 00:06:57.497577 2774 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 00:06:57.498514 kubelet[2774]: I0904 00:06:57.498482 2774 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 00:06:57.501220 kubelet[2774]: I0904 00:06:57.501187 2774 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 00:06:57.517044 kubelet[2774]: I0904 00:06:57.516065 2774 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 00:06:57.528962 kubelet[2774]: I0904 00:06:57.528920 2774 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 00:06:57.529464 sudo[2788]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 00:06:57.530080 sudo[2788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 00:06:57.538090 kubelet[2774]: I0904 00:06:57.537744 2774 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 00:06:57.538710 kubelet[2774]: I0904 00:06:57.538631 2774 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 00:06:57.539022 kubelet[2774]: I0904 00:06:57.538687 2774 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 00:06:57.539235 kubelet[2774]: I0904 00:06:57.539043 2774 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 00:06:57.539235 
kubelet[2774]: I0904 00:06:57.539065 2774 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 00:06:57.539235 kubelet[2774]: I0904 00:06:57.539155 2774 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:57.540146 kubelet[2774]: I0904 00:06:57.539419 2774 kubelet.go:446] "Attempting to sync node with API server" Sep 4 00:06:57.540146 kubelet[2774]: I0904 00:06:57.539454 2774 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 00:06:57.540146 kubelet[2774]: I0904 00:06:57.539505 2774 kubelet.go:352] "Adding apiserver pod source" Sep 4 00:06:57.540146 kubelet[2774]: I0904 00:06:57.539541 2774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 00:06:57.545038 kubelet[2774]: I0904 00:06:57.544173 2774 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 4 00:06:57.549112 kubelet[2774]: I0904 00:06:57.549061 2774 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 00:06:57.553299 kubelet[2774]: I0904 00:06:57.552384 2774 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 00:06:57.553299 kubelet[2774]: I0904 00:06:57.552454 2774 server.go:1287] "Started kubelet" Sep 4 00:06:57.562233 kubelet[2774]: I0904 00:06:57.562185 2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 00:06:57.569280 kubelet[2774]: I0904 00:06:57.569207 2774 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 00:06:57.576074 kubelet[2774]: I0904 00:06:57.573112 2774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 00:06:57.578449 kubelet[2774]: I0904 00:06:57.576879 2774 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 00:06:57.587805 kubelet[2774]: I0904 00:06:57.579078 2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 00:06:57.594049 kubelet[2774]: I0904 00:06:57.582115 2774 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 00:06:57.594049 kubelet[2774]: I0904 00:06:57.582136 2774 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 00:06:57.594049 kubelet[2774]: E0904 00:06:57.582405 2774 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" not found" Sep 4 00:06:57.594049 kubelet[2774]: I0904 00:06:57.584784 2774 server.go:479] "Adding debug handlers to kubelet server" Sep 4 00:06:57.594049 kubelet[2774]: I0904 00:06:57.591568 2774 reconciler.go:26] "Reconciler: start to sync state" Sep 4 00:06:57.600035 kubelet[2774]: I0904 00:06:57.598730 2774 factory.go:221] Registration of the systemd container factory successfully Sep 4 00:06:57.600483 kubelet[2774]: I0904 00:06:57.600446 2774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 00:06:57.626666 kubelet[2774]: I0904 00:06:57.626512 2774 factory.go:221] Registration of the containerd container factory successfully Sep 4 00:06:57.656392 kubelet[2774]: E0904 00:06:57.656326 2774 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 00:06:57.679093 kubelet[2774]: I0904 00:06:57.678939 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 00:06:57.688732 kubelet[2774]: I0904 00:06:57.688668 2774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 00:06:57.688732 kubelet[2774]: I0904 00:06:57.688745 2774 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 00:06:57.689058 kubelet[2774]: I0904 00:06:57.688783 2774 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 00:06:57.689058 kubelet[2774]: I0904 00:06:57.688796 2774 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 00:06:57.689058 kubelet[2774]: E0904 00:06:57.688878 2774 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 00:06:57.784117 kubelet[2774]: I0904 00:06:57.784080 2774 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 00:06:57.784373 kubelet[2774]: I0904 00:06:57.784358 2774 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 00:06:57.784527 kubelet[2774]: I0904 00:06:57.784514 2774 state_mem.go:36] "Initialized new in-memory state store" Sep 4 00:06:57.785028 kubelet[2774]: I0904 00:06:57.784969 2774 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 00:06:57.785776 kubelet[2774]: I0904 00:06:57.785691 2774 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 00:06:57.785987 kubelet[2774]: I0904 00:06:57.785920 2774 policy_none.go:49] "None policy: Start" Sep 4 00:06:57.785987 kubelet[2774]: I0904 00:06:57.785943 2774 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 00:06:57.785987 kubelet[2774]: I0904 00:06:57.785966 2774 state_mem.go:35] "Initializing new in-memory state store" Sep 4 00:06:57.786792 kubelet[2774]: I0904 00:06:57.786662 2774 state_mem.go:75] "Updated machine memory state" Sep 4 00:06:57.789096 kubelet[2774]: E0904 00:06:57.789052 2774 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 00:06:57.800309 kubelet[2774]: I0904 00:06:57.800262 2774 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 00:06:57.801053 kubelet[2774]: I0904 00:06:57.800566 2774 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 00:06:57.801053 kubelet[2774]: I0904 00:06:57.800589 2774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 00:06:57.810033 kubelet[2774]: I0904 00:06:57.808754 2774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 00:06:57.817676 kubelet[2774]: E0904 00:06:57.816368 2774 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 00:06:57.945888 kubelet[2774]: I0904 00:06:57.945617 2774 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:57.970753 kubelet[2774]: I0904 00:06:57.970544 2774 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:57.970753 kubelet[2774]: I0904 00:06:57.970671 2774 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:57.990885 kubelet[2774]: I0904 00:06:57.990395 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:57.992164 kubelet[2774]: I0904 00:06:57.991960 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:57.993717 kubelet[2774]: I0904 00:06:57.992564 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.015582 kubelet[2774]: W0904 00:06:58.015482 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 00:06:58.024633 kubelet[2774]: W0904 00:06:58.024580 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 00:06:58.026081 kubelet[2774]: W0904 00:06:58.025894 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 00:06:58.098020 kubelet[2774]: I0904 00:06:58.097540 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a8138086488ed5a9133d9bd032941b9-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"7a8138086488ed5a9133d9bd032941b9\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098020 kubelet[2774]: I0904 00:06:58.097638 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098020 kubelet[2774]: I0904 00:06:58.097679 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098020 kubelet[2774]: I0904 00:06:58.097721 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098466 kubelet[2774]: I0904 00:06:58.097756 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098466 kubelet[2774]: I0904 00:06:58.097792 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/625dea1bcd784d9f8e4eacdef9d87803-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"625dea1bcd784d9f8e4eacdef9d87803\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098466 kubelet[2774]: I0904 00:06:58.097823 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098466 kubelet[2774]: I0904 00:06:58.097863 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.098667 kubelet[2774]: I0904 00:06:58.097896 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/709e384e83e1368321257eed579fd138-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" (UID: \"709e384e83e1368321257eed579fd138\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.506549 sudo[2788]: pam_unix(sudo:session): session closed for user root Sep 4 00:06:58.558906 kubelet[2774]: I0904 00:06:58.558822 2774 apiserver.go:52] "Watching apiserver" Sep 4 00:06:58.590109 kubelet[2774]: I0904 00:06:58.590031 2774 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 00:06:58.747741 kubelet[2774]: I0904 00:06:58.747658 2774 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.762519 kubelet[2774]: W0904 00:06:58.761315 2774 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
be no more than 63 characters] Sep 4 00:06:58.764145 kubelet[2774]: E0904 00:06:58.762801 2774 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" already exists" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" Sep 4 00:06:58.806042 kubelet[2774]: I0904 00:06:58.804630 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" podStartSLOduration=0.804599773 podStartE2EDuration="804.599773ms" podCreationTimestamp="2025-09-04 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:58.789606378 +0000 UTC m=+1.389766689" watchObservedRunningTime="2025-09-04 00:06:58.804599773 +0000 UTC m=+1.404760051" Sep 4 00:06:58.823703 kubelet[2774]: I0904 00:06:58.823614 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" podStartSLOduration=0.823590192 podStartE2EDuration="823.590192ms" podCreationTimestamp="2025-09-04 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:58.807250859 +0000 UTC m=+1.407411141" watchObservedRunningTime="2025-09-04 00:06:58.823590192 +0000 UTC m=+1.423750471" Sep 4 00:06:58.840284 kubelet[2774]: I0904 00:06:58.840060 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" podStartSLOduration=0.840031908 podStartE2EDuration="840.031908ms" podCreationTimestamp="2025-09-04 00:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:06:58.824861733 +0000 UTC m=+1.425022006" watchObservedRunningTime="2025-09-04 00:06:58.840031908 +0000 UTC m=+1.440192191" Sep 4 00:07:00.709506 sudo[1855]: pam_unix(sudo:session): session closed for user root Sep 4 00:07:00.753274 sshd[1854]: Connection closed by 147.75.109.163 port 41628 Sep 4 00:07:00.754425 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Sep 4 00:07:00.761244 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:07:00.762356 systemd[1]: sshd@8-10.128.0.81:22-147.75.109.163:41628.service: Deactivated successfully. Sep 4 00:07:00.766490 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:07:00.767332 systemd[1]: session-9.scope: Consumed 6.893s CPU time, 270.8M memory peak. Sep 4 00:07:00.772476 systemd-logind[1486]: Removed session 9. Sep 4 00:07:02.496197 kubelet[2774]: I0904 00:07:02.496119 2774 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:07:02.497289 containerd[1569]: time="2025-09-04T00:07:02.497178996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
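The kubelet entries above all share the standard klog header: a severity letter fused with the month and day (I0904), the wall-clock time, the emitting PID (2774 here), the source file and line, and then the quoted message. The sketch below is a minimal Go illustration of splitting one such entry into those fields; the regular expression and the sample line are assumptions for demonstration, not anything kubelet itself exposes.

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches the prefix seen on the kubelet lines in this log,
    // e.g. `I0904 00:06:57.497318 2774 server.go:520] "Kubelet version" ...`.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0904 00:06:57.497318 2774 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog-formatted line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }

The same header appears on every kubelet entry in this log, so the pattern can be applied entry by entry when filtering the journal.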
Sep 4 00:07:02.498388 kubelet[2774]: I0904 00:07:02.497927 2774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:07:03.150580 systemd[1]: Created slice kubepods-besteffort-pod722b9a4b_5a4c_4c58_8583_f45b0973d530.slice - libcontainer container kubepods-besteffort-pod722b9a4b_5a4c_4c58_8583_f45b0973d530.slice. Sep 4 00:07:03.171489 systemd[1]: Created slice kubepods-burstable-pod20f07355_29b6_4076_83e0_c543cdd328b4.slice - libcontainer container kubepods-burstable-pod20f07355_29b6_4076_83e0_c543cdd328b4.slice. Sep 4 00:07:03.230131 kubelet[2774]: I0904 00:07:03.230048 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/722b9a4b-5a4c-4c58-8583-f45b0973d530-xtables-lock\") pod \"kube-proxy-cf674\" (UID: \"722b9a4b-5a4c-4c58-8583-f45b0973d530\") " pod="kube-system/kube-proxy-cf674" Sep 4 00:07:03.230543 kubelet[2774]: I0904 00:07:03.230486 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/722b9a4b-5a4c-4c58-8583-f45b0973d530-lib-modules\") pod \"kube-proxy-cf674\" (UID: \"722b9a4b-5a4c-4c58-8583-f45b0973d530\") " pod="kube-system/kube-proxy-cf674" Sep 4 00:07:03.230806 kubelet[2774]: I0904 00:07:03.230780 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9krr\" (UniqueName: \"kubernetes.io/projected/722b9a4b-5a4c-4c58-8583-f45b0973d530-kube-api-access-x9krr\") pod \"kube-proxy-cf674\" (UID: \"722b9a4b-5a4c-4c58-8583-f45b0973d530\") " pod="kube-system/kube-proxy-cf674" Sep 4 00:07:03.230967 kubelet[2774]: I0904 00:07:03.230949 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-run\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.231173 kubelet[2774]: I0904 00:07:03.231154 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-config-path\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.231422 kubelet[2774]: I0904 00:07:03.231387 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-hostproc\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.231676 kubelet[2774]: I0904 00:07:03.231619 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cni-path\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.231808 kubelet[2774]: I0904 00:07:03.231789 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20f07355-29b6-4076-83e0-c543cdd328b4-clustermesh-secrets\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " 
pod="kube-system/cilium-g84gp" Sep 4 00:07:03.232088 kubelet[2774]: I0904 00:07:03.232037 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fsb\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-kube-api-access-b8fsb\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.232279 kubelet[2774]: I0904 00:07:03.232228 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-cgroup\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.232452 kubelet[2774]: I0904 00:07:03.232432 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/722b9a4b-5a4c-4c58-8583-f45b0973d530-kube-proxy\") pod \"kube-proxy-cf674\" (UID: \"722b9a4b-5a4c-4c58-8583-f45b0973d530\") " pod="kube-system/kube-proxy-cf674" Sep 4 00:07:03.232682 kubelet[2774]: I0904 00:07:03.232606 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-net\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.232876 kubelet[2774]: I0904 00:07:03.232818 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-etc-cni-netd\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.233077 kubelet[2774]: I0904 00:07:03.232990 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-bpf-maps\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.233252 kubelet[2774]: I0904 00:07:03.233205 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-lib-modules\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.233464 kubelet[2774]: I0904 00:07:03.233413 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-xtables-lock\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.233644 kubelet[2774]: I0904 00:07:03.233611 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-kernel\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.233888 kubelet[2774]: I0904 00:07:03.233835 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-hubble-tls\") pod \"cilium-g84gp\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " pod="kube-system/cilium-g84gp" Sep 4 00:07:03.468067 containerd[1569]: time="2025-09-04T00:07:03.467844518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cf674,Uid:722b9a4b-5a4c-4c58-8583-f45b0973d530,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:03.482502 containerd[1569]: time="2025-09-04T00:07:03.482433228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g84gp,Uid:20f07355-29b6-4076-83e0-c543cdd328b4,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:03.522169 containerd[1569]: time="2025-09-04T00:07:03.521814434Z" level=info msg="connecting to shim 7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409" address="unix:///run/containerd/s/af1de0e107e5523ac0858ee636317bbd629e345a3406c5749c7e63937d72d1f6" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:03.571912 kubelet[2774]: I0904 00:07:03.570421 2774 status_manager.go:890] "Failed to get status for pod" podUID="acfc7f93-aa2e-4886-ba84-59e875a7a960" pod="kube-system/cilium-operator-6c4d7847fc-qqnlx" err="pods \"cilium-operator-6c4d7847fc-qqnlx\" is forbidden: User \"system:node:ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532' and this object" Sep 4 00:07:03.586761 systemd[1]: Created slice kubepods-besteffort-podacfc7f93_aa2e_4886_ba84_59e875a7a960.slice - libcontainer container kubepods-besteffort-podacfc7f93_aa2e_4886_ba84_59e875a7a960.slice. Sep 4 00:07:03.615987 containerd[1569]: time="2025-09-04T00:07:03.615918232Z" level=info msg="connecting to shim 414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:03.638484 kubelet[2774]: I0904 00:07:03.637945 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acfc7f93-aa2e-4886-ba84-59e875a7a960-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qqnlx\" (UID: \"acfc7f93-aa2e-4886-ba84-59e875a7a960\") " pod="kube-system/cilium-operator-6c4d7847fc-qqnlx" Sep 4 00:07:03.638484 kubelet[2774]: I0904 00:07:03.638046 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrm79\" (UniqueName: \"kubernetes.io/projected/acfc7f93-aa2e-4886-ba84-59e875a7a960-kube-api-access-xrm79\") pod \"cilium-operator-6c4d7847fc-qqnlx\" (UID: \"acfc7f93-aa2e-4886-ba84-59e875a7a960\") " pod="kube-system/cilium-operator-6c4d7847fc-qqnlx" Sep 4 00:07:03.643383 systemd[1]: Started cri-containerd-7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409.scope - libcontainer container 7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409. Sep 4 00:07:03.684467 systemd[1]: Started cri-containerd-414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669.scope - libcontainer container 414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669. 
Sep 4 00:07:03.724776 containerd[1569]: time="2025-09-04T00:07:03.724605655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cf674,Uid:722b9a4b-5a4c-4c58-8583-f45b0973d530,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409\"" Sep 4 00:07:03.733928 containerd[1569]: time="2025-09-04T00:07:03.733867475Z" level=info msg="CreateContainer within sandbox \"7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:07:03.775274 containerd[1569]: time="2025-09-04T00:07:03.775165919Z" level=info msg="Container 08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:03.790280 containerd[1569]: time="2025-09-04T00:07:03.789687266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g84gp,Uid:20f07355-29b6-4076-83e0-c543cdd328b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\"" Sep 4 00:07:03.800748 containerd[1569]: time="2025-09-04T00:07:03.800245106Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 00:07:03.812731 containerd[1569]: time="2025-09-04T00:07:03.809993069Z" level=info msg="CreateContainer within sandbox \"7f9c7f8ea5da7c69301ea70918b7da4662b6f96a2472a0fb510830d3247fa409\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a\"" Sep 4 00:07:03.815037 containerd[1569]: time="2025-09-04T00:07:03.813826503Z" level=info msg="StartContainer for \"08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a\"" Sep 4 00:07:03.823128 containerd[1569]: time="2025-09-04T00:07:03.823042688Z" level=info msg="connecting to shim 08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a" address="unix:///run/containerd/s/af1de0e107e5523ac0858ee636317bbd629e345a3406c5749c7e63937d72d1f6" protocol=ttrpc version=3 Sep 4 00:07:03.881817 systemd[1]: Started cri-containerd-08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a.scope - libcontainer container 08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a. Sep 4 00:07:03.901872 containerd[1569]: time="2025-09-04T00:07:03.901743161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qqnlx,Uid:acfc7f93-aa2e-4886-ba84-59e875a7a960,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:03.951070 containerd[1569]: time="2025-09-04T00:07:03.950247007Z" level=info msg="connecting to shim 2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e" address="unix:///run/containerd/s/05c61a4cec50762db9f32b74be06a9f01410b7c6b8f679b9ac0634773fbbb617" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:04.002360 containerd[1569]: time="2025-09-04T00:07:04.001662920Z" level=info msg="StartContainer for \"08dfb4065a68234fc92701b87aded3a90d1e7911f38e910859fefe8afe4c730a\" returns successfully" Sep 4 00:07:04.044731 systemd[1]: Started cri-containerd-2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e.scope - libcontainer container 2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e. 
Sep 4 00:07:04.171232 containerd[1569]: time="2025-09-04T00:07:04.171148419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qqnlx,Uid:acfc7f93-aa2e-4886-ba84-59e875a7a960,Namespace:kube-system,Attempt:0,} returns sandbox id \"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\"" Sep 4 00:07:04.833941 kubelet[2774]: I0904 00:07:04.833731 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cf674" podStartSLOduration=1.833696931 podStartE2EDuration="1.833696931s" podCreationTimestamp="2025-09-04 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:04.815204434 +0000 UTC m=+7.415364743" watchObservedRunningTime="2025-09-04 00:07:04.833696931 +0000 UTC m=+7.433857514" Sep 4 00:07:12.433531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843506438.mount: Deactivated successfully. Sep 4 00:07:15.539141 containerd[1569]: time="2025-09-04T00:07:15.539044529Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:15.541391 containerd[1569]: time="2025-09-04T00:07:15.541050539Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 00:07:15.542706 containerd[1569]: time="2025-09-04T00:07:15.542655535Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:15.545187 containerd[1569]: time="2025-09-04T00:07:15.545132215Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.744823446s" Sep 4 00:07:15.545677 containerd[1569]: time="2025-09-04T00:07:15.545409874Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 00:07:15.547608 containerd[1569]: time="2025-09-04T00:07:15.547557631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 00:07:15.551465 containerd[1569]: time="2025-09-04T00:07:15.551369355Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 00:07:15.567062 containerd[1569]: time="2025-09-04T00:07:15.566978800Z" level=info msg="Container d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:15.582431 containerd[1569]: time="2025-09-04T00:07:15.581665373Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\"" Sep 4 00:07:15.584802 containerd[1569]: time="2025-09-04T00:07:15.584754962Z" level=info msg="StartContainer for \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\"" Sep 4 00:07:15.587549 containerd[1569]: time="2025-09-04T00:07:15.587480818Z" level=info msg="connecting to shim d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" protocol=ttrpc version=3 Sep 4 00:07:15.636391 systemd[1]: Started cri-containerd-d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984.scope - libcontainer container d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984. Sep 4 00:07:15.701766 containerd[1569]: time="2025-09-04T00:07:15.701705220Z" level=info msg="StartContainer for \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" returns successfully" Sep 4 00:07:15.736339 systemd[1]: cri-containerd-d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984.scope: Deactivated successfully. Sep 4 00:07:15.742040 containerd[1569]: time="2025-09-04T00:07:15.741836432Z" level=info msg="received exit event container_id:\"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" id:\"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" pid:3187 exited_at:{seconds:1756944435 nanos:741207160}" Sep 4 00:07:15.743290 containerd[1569]: time="2025-09-04T00:07:15.743239959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" id:\"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" pid:3187 exited_at:{seconds:1756944435 nanos:741207160}" Sep 4 00:07:15.785739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984-rootfs.mount: Deactivated successfully. Sep 4 00:07:18.563335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881371601.mount: Deactivated successfully. Sep 4 00:07:18.863031 containerd[1569]: time="2025-09-04T00:07:18.860987062Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 00:07:18.898479 containerd[1569]: time="2025-09-04T00:07:18.898335207Z" level=info msg="Container 7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:18.908450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750140701.mount: Deactivated successfully. 
Sep 4 00:07:18.909925 containerd[1569]: time="2025-09-04T00:07:18.909863614Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\"" Sep 4 00:07:18.911333 containerd[1569]: time="2025-09-04T00:07:18.911296556Z" level=info msg="StartContainer for \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\"" Sep 4 00:07:18.912920 containerd[1569]: time="2025-09-04T00:07:18.912779702Z" level=info msg="connecting to shim 7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" protocol=ttrpc version=3 Sep 4 00:07:18.962590 systemd[1]: Started cri-containerd-7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00.scope - libcontainer container 7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00. Sep 4 00:07:19.036684 containerd[1569]: time="2025-09-04T00:07:19.036192231Z" level=info msg="StartContainer for \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" returns successfully" Sep 4 00:07:19.088078 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 00:07:19.088555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 00:07:19.089393 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:07:19.094477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 00:07:19.100435 systemd[1]: cri-containerd-7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00.scope: Deactivated successfully. Sep 4 00:07:19.109961 containerd[1569]: time="2025-09-04T00:07:19.109904444Z" level=info msg="received exit event container_id:\"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" id:\"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" pid:3242 exited_at:{seconds:1756944439 nanos:107297672}" Sep 4 00:07:19.112222 containerd[1569]: time="2025-09-04T00:07:19.112146170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" id:\"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" pid:3242 exited_at:{seconds:1756944439 nanos:107297672}" Sep 4 00:07:19.158838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 00:07:19.868182 containerd[1569]: time="2025-09-04T00:07:19.868100800Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 00:07:19.902591 containerd[1569]: time="2025-09-04T00:07:19.900393012Z" level=info msg="Container 5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:19.935969 containerd[1569]: time="2025-09-04T00:07:19.935889638Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\"" Sep 4 00:07:19.937469 containerd[1569]: time="2025-09-04T00:07:19.937415734Z" level=info msg="StartContainer for \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\"" Sep 4 00:07:19.941915 containerd[1569]: time="2025-09-04T00:07:19.941686281Z" level=info msg="connecting to shim 5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" protocol=ttrpc version=3 Sep 4 00:07:19.950316 containerd[1569]: time="2025-09-04T00:07:19.950180605Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:19.953262 containerd[1569]: time="2025-09-04T00:07:19.953175317Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 00:07:19.962233 containerd[1569]: time="2025-09-04T00:07:19.962146021Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 00:07:19.966368 containerd[1569]: time="2025-09-04T00:07:19.965273259Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.417659985s" Sep 4 00:07:19.966368 containerd[1569]: time="2025-09-04T00:07:19.966094372Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 00:07:19.974076 containerd[1569]: time="2025-09-04T00:07:19.973344145Z" level=info msg="CreateContainer within sandbox \"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 00:07:19.991936 containerd[1569]: time="2025-09-04T00:07:19.991855788Z" level=info msg="Container e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:20.003354 containerd[1569]: time="2025-09-04T00:07:20.003279427Z" level=info msg="CreateContainer within sandbox 
\"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\"" Sep 4 00:07:20.007328 containerd[1569]: time="2025-09-04T00:07:20.006360963Z" level=info msg="StartContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\"" Sep 4 00:07:20.009833 containerd[1569]: time="2025-09-04T00:07:20.009778176Z" level=info msg="connecting to shim e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db" address="unix:///run/containerd/s/05c61a4cec50762db9f32b74be06a9f01410b7c6b8f679b9ac0634773fbbb617" protocol=ttrpc version=3 Sep 4 00:07:20.011781 systemd[1]: Started cri-containerd-5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4.scope - libcontainer container 5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4. Sep 4 00:07:20.055705 systemd[1]: Started cri-containerd-e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db.scope - libcontainer container e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db. Sep 4 00:07:20.126272 systemd[1]: cri-containerd-5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4.scope: Deactivated successfully. Sep 4 00:07:20.132786 containerd[1569]: time="2025-09-04T00:07:20.132492562Z" level=info msg="received exit event container_id:\"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" id:\"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" pid:3300 exited_at:{seconds:1756944440 nanos:131540968}" Sep 4 00:07:20.134640 containerd[1569]: time="2025-09-04T00:07:20.134578472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" id:\"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" pid:3300 exited_at:{seconds:1756944440 nanos:131540968}" Sep 4 00:07:20.171716 containerd[1569]: time="2025-09-04T00:07:20.171660848Z" level=info msg="StartContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" returns successfully" Sep 4 00:07:20.175511 containerd[1569]: time="2025-09-04T00:07:20.175386635Z" level=info msg="StartContainer for \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" returns successfully" Sep 4 00:07:20.549671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4-rootfs.mount: Deactivated successfully. Sep 4 00:07:20.884034 containerd[1569]: time="2025-09-04T00:07:20.882701139Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 00:07:20.903737 containerd[1569]: time="2025-09-04T00:07:20.900864863Z" level=info msg="Container abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:20.913065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88559176.mount: Deactivated successfully. 
Sep 4 00:07:20.925944 containerd[1569]: time="2025-09-04T00:07:20.925890973Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\"" Sep 4 00:07:20.929298 containerd[1569]: time="2025-09-04T00:07:20.929156818Z" level=info msg="StartContainer for \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\"" Sep 4 00:07:20.932032 containerd[1569]: time="2025-09-04T00:07:20.931965865Z" level=info msg="connecting to shim abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" protocol=ttrpc version=3 Sep 4 00:07:20.981258 systemd[1]: Started cri-containerd-abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8.scope - libcontainer container abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8. Sep 4 00:07:21.081369 systemd[1]: cri-containerd-abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8.scope: Deactivated successfully. Sep 4 00:07:21.083266 containerd[1569]: time="2025-09-04T00:07:21.081464325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" id:\"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" pid:3374 exited_at:{seconds:1756944441 nanos:80904195}" Sep 4 00:07:21.083266 containerd[1569]: time="2025-09-04T00:07:21.082390846Z" level=info msg="StartContainer for \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" returns successfully" Sep 4 00:07:21.083266 containerd[1569]: time="2025-09-04T00:07:21.082549471Z" level=info msg="received exit event container_id:\"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" id:\"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" pid:3374 exited_at:{seconds:1756944441 nanos:80904195}" Sep 4 00:07:21.147356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8-rootfs.mount: Deactivated successfully. Sep 4 00:07:21.894990 containerd[1569]: time="2025-09-04T00:07:21.894827462Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 00:07:21.915029 containerd[1569]: time="2025-09-04T00:07:21.914881954Z" level=info msg="Container 6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:21.923119 kubelet[2774]: I0904 00:07:21.920653 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qqnlx" podStartSLOduration=3.127678059 podStartE2EDuration="18.92062247s" podCreationTimestamp="2025-09-04 00:07:03 +0000 UTC" firstStartedPulling="2025-09-04 00:07:04.175260889 +0000 UTC m=+6.775421165" lastFinishedPulling="2025-09-04 00:07:19.968205299 +0000 UTC m=+22.568365576" observedRunningTime="2025-09-04 00:07:21.209631398 +0000 UTC m=+23.809791683" watchObservedRunningTime="2025-09-04 00:07:21.92062247 +0000 UTC m=+24.520782753" Sep 4 00:07:21.933887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007841264.mount: Deactivated successfully. 
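The pod_startup_latency_tracker entry above gives the cilium-operator pod both a podStartE2EDuration of 18.92062247s and a much smaller podStartSLOduration of 3.127678059s. The logged numbers are consistent with the SLO figure being the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ readings); that interpretation is inferred from the arithmetic below rather than quoted from kubelet.

    package main

    import "fmt"

    func main() {
        // Monotonic (m=+) readings and the E2E duration quoted from the
        // cilium-operator pod_startup_latency_tracker entry above.
        const (
            firstStartedPulling = 6.775421165  // m=+ seconds
            lastFinishedPulling = 22.568365576 // m=+ seconds
            e2eDuration         = 18.92062247  // podStartE2EDuration, seconds
        )

        pullWindow := lastFinishedPulling - firstStartedPulling
        slo := e2eDuration - pullWindow
        // Should reproduce the logged podStartSLOduration=3.127678059
        // (up to floating-point rounding).
        fmt.Printf("pull window: %.9fs, SLO duration: %.9fs\n", pullWindow, slo)
    }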
Sep 4 00:07:21.935874 containerd[1569]: time="2025-09-04T00:07:21.935795879Z" level=info msg="CreateContainer within sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\"" Sep 4 00:07:21.936895 containerd[1569]: time="2025-09-04T00:07:21.936855183Z" level=info msg="StartContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\"" Sep 4 00:07:21.938870 containerd[1569]: time="2025-09-04T00:07:21.938823116Z" level=info msg="connecting to shim 6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba" address="unix:///run/containerd/s/34ee9b81dbddb995fff5aa849bc4bd19429012d493df272c40cb1e20146ab476" protocol=ttrpc version=3 Sep 4 00:07:21.995304 systemd[1]: Started cri-containerd-6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba.scope - libcontainer container 6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba. Sep 4 00:07:22.069356 containerd[1569]: time="2025-09-04T00:07:22.069285562Z" level=info msg="StartContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" returns successfully" Sep 4 00:07:22.176373 containerd[1569]: time="2025-09-04T00:07:22.176229769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" id:\"0da8175a6584f5070d3cfe821bd12c8c0d1212abe88f278765357029bdd315d3\" pid:3441 exited_at:{seconds:1756944442 nanos:175664266}" Sep 4 00:07:22.203050 kubelet[2774]: I0904 00:07:22.202990 2774 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 00:07:22.264862 systemd[1]: Created slice kubepods-burstable-pod789473c8_d994_4d97_9f18_2707cac907b4.slice - libcontainer container kubepods-burstable-pod789473c8_d994_4d97_9f18_2707cac907b4.slice. Sep 4 00:07:22.281197 systemd[1]: Created slice kubepods-burstable-podc6430a4a_a8cf_489e_aef3_94b1de27c7d4.slice - libcontainer container kubepods-burstable-podc6430a4a_a8cf_489e_aef3_94b1de27c7d4.slice. 
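The kubepods slices just created for the two coredns pods are derived mechanically from the QoS class and the pod UID (the UIDs reappear in the volume entries that follow): dashes in the UID become underscores and the result is wrapped as kubepods-<qos>-pod<uid>.slice, with guaranteed pods dropping the class segment. The helper below reproduces the burstable slice above; the function itself is only an illustration, not kubelet code.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name used for a pod's cgroup, as seen
    // in the "Created slice kubepods-burstable-pod...slice" entries above.
    // qos is "burstable", "besteffort", or "" for guaranteed pods.
    func podSlice(qos, uid string) string {
        name := "kubepods"
        if qos != "" {
            name += "-" + qos
        }
        return name + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        // UID of the coredns-668d6bf9bc-n4kzv pod from the volume entries below.
        fmt.Println(podSlice("burstable", "789473c8-d994-4d97-9f18-2707cac907b4"))
        // kubepods-burstable-pod789473c8_d994_4d97_9f18_2707cac907b4.slice
    }

The besteffort kube-proxy and cilium-operator slices earlier in the log follow the same pattern.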
Sep 4 00:07:22.286966 kubelet[2774]: I0904 00:07:22.286867 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg5bx\" (UniqueName: \"kubernetes.io/projected/c6430a4a-a8cf-489e-aef3-94b1de27c7d4-kube-api-access-wg5bx\") pod \"coredns-668d6bf9bc-6x4tt\" (UID: \"c6430a4a-a8cf-489e-aef3-94b1de27c7d4\") " pod="kube-system/coredns-668d6bf9bc-6x4tt" Sep 4 00:07:22.286966 kubelet[2774]: I0904 00:07:22.286951 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phngv\" (UniqueName: \"kubernetes.io/projected/789473c8-d994-4d97-9f18-2707cac907b4-kube-api-access-phngv\") pod \"coredns-668d6bf9bc-n4kzv\" (UID: \"789473c8-d994-4d97-9f18-2707cac907b4\") " pod="kube-system/coredns-668d6bf9bc-n4kzv" Sep 4 00:07:22.287220 kubelet[2774]: I0904 00:07:22.286988 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6430a4a-a8cf-489e-aef3-94b1de27c7d4-config-volume\") pod \"coredns-668d6bf9bc-6x4tt\" (UID: \"c6430a4a-a8cf-489e-aef3-94b1de27c7d4\") " pod="kube-system/coredns-668d6bf9bc-6x4tt" Sep 4 00:07:22.287545 kubelet[2774]: I0904 00:07:22.287274 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/789473c8-d994-4d97-9f18-2707cac907b4-config-volume\") pod \"coredns-668d6bf9bc-n4kzv\" (UID: \"789473c8-d994-4d97-9f18-2707cac907b4\") " pod="kube-system/coredns-668d6bf9bc-n4kzv" Sep 4 00:07:22.577035 containerd[1569]: time="2025-09-04T00:07:22.576406616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4kzv,Uid:789473c8-d994-4d97-9f18-2707cac907b4,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:22.593277 containerd[1569]: time="2025-09-04T00:07:22.591858131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6x4tt,Uid:c6430a4a-a8cf-489e-aef3-94b1de27c7d4,Namespace:kube-system,Attempt:0,}" Sep 4 00:07:22.936888 kubelet[2774]: I0904 00:07:22.936806 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g84gp" podStartSLOduration=8.188310047 podStartE2EDuration="19.936779495s" podCreationTimestamp="2025-09-04 00:07:03 +0000 UTC" firstStartedPulling="2025-09-04 00:07:03.798725493 +0000 UTC m=+6.398885768" lastFinishedPulling="2025-09-04 00:07:15.547194943 +0000 UTC m=+18.147355216" observedRunningTime="2025-09-04 00:07:22.93537505 +0000 UTC m=+25.535535334" watchObservedRunningTime="2025-09-04 00:07:22.936779495 +0000 UTC m=+25.536939778" Sep 4 00:07:24.677785 systemd-networkd[1440]: cilium_host: Link UP Sep 4 00:07:24.679236 systemd-networkd[1440]: cilium_net: Link UP Sep 4 00:07:24.679548 systemd-networkd[1440]: cilium_net: Gained carrier Sep 4 00:07:24.679812 systemd-networkd[1440]: cilium_host: Gained carrier Sep 4 00:07:24.705654 systemd-networkd[1440]: cilium_net: Gained IPv6LL Sep 4 00:07:24.850834 systemd-networkd[1440]: cilium_vxlan: Link UP Sep 4 00:07:24.850850 systemd-networkd[1440]: cilium_vxlan: Gained carrier Sep 4 00:07:25.173069 kernel: NET: Registered PF_ALG protocol family Sep 4 00:07:25.178226 systemd-networkd[1440]: cilium_host: Gained IPv6LL Sep 4 00:07:26.216528 systemd-networkd[1440]: lxc_health: Link UP Sep 4 00:07:26.228795 systemd-networkd[1440]: lxc_health: Gained carrier Sep 4 00:07:26.299285 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL Sep 4 
00:07:26.664059 kernel: eth0: renamed from tmp18abf Sep 4 00:07:26.678879 systemd-networkd[1440]: lxc88f6e814acdc: Link UP Sep 4 00:07:26.679762 systemd-networkd[1440]: lxc88f6e814acdc: Gained carrier Sep 4 00:07:26.703333 systemd-networkd[1440]: lxc11c6f74cb2f6: Link UP Sep 4 00:07:26.719148 kernel: eth0: renamed from tmp7f147 Sep 4 00:07:26.726535 systemd-networkd[1440]: lxc11c6f74cb2f6: Gained carrier Sep 4 00:07:27.771149 systemd-networkd[1440]: lxc88f6e814acdc: Gained IPv6LL Sep 4 00:07:28.218475 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 4 00:07:28.282461 systemd-networkd[1440]: lxc11c6f74cb2f6: Gained IPv6LL Sep 4 00:07:30.716869 ntpd[1480]: Listen normally on 7 cilium_host 192.168.0.133:123 Sep 4 00:07:30.717077 ntpd[1480]: Listen normally on 8 cilium_net [fe80::28d1:bdff:fe29:ddd0%4]:123 Sep 4 00:07:30.717173 ntpd[1480]: Listen normally on 9 cilium_host [fe80::e48a:86ff:fec5:f47d%5]:123 Sep 4 00:07:30.717243 ntpd[1480]: Listen normally on 10 cilium_vxlan [fe80::d8af:39ff:fed0:2623%6]:123 Sep 4 00:07:30.717310 ntpd[1480]: Listen normally on 11 lxc_health [fe80::b450:f1ff:fe8e:4e7a%8]:123 Sep 4 00:07:30.717372 ntpd[1480]: Listen normally on 12 lxc88f6e814acdc [fe80::4830:f4ff:fe6d:90b5%10]:123 Sep 4 00:07:30.717435 ntpd[1480]: Listen normally on 13 lxc11c6f74cb2f6 [fe80::a4f3:64ff:febc:c3c%12]:123 Sep 4 00:07:32.090539 containerd[1569]: time="2025-09-04T00:07:32.090477164Z" level=info msg="connecting to shim 18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce" address="unix:///run/containerd/s/d13a9770c8d8dcf716a67347934885e24435c2486d79267b2068faa3ffac8a35" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:32.151675 systemd[1]: Started cri-containerd-18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce.scope - libcontainer container 18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce. Sep 4 00:07:32.197532 containerd[1569]: time="2025-09-04T00:07:32.197097366Z" level=info msg="connecting to shim 7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964" address="unix:///run/containerd/s/f662ca4834121120b246c0db7c4054e6d8444588d78801f73cdbe63d83a02bfd" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:07:32.256740 systemd[1]: Started cri-containerd-7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964.scope - libcontainer container 7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964.
Sep 4 00:07:32.330833 containerd[1569]: time="2025-09-04T00:07:32.330756971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4kzv,Uid:789473c8-d994-4d97-9f18-2707cac907b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce\"" Sep 4 00:07:32.339982 containerd[1569]: time="2025-09-04T00:07:32.338487804Z" level=info msg="CreateContainer within sandbox \"18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:07:32.356697 containerd[1569]: time="2025-09-04T00:07:32.356639933Z" level=info msg="Container 46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:32.384040 containerd[1569]: time="2025-09-04T00:07:32.382159812Z" level=info msg="CreateContainer within sandbox \"18abfd6ef58016447ec62ff58d0b8d8e8231759e0f6d29c2e92f34b75f439bce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37\"" Sep 4 00:07:32.385130 containerd[1569]: time="2025-09-04T00:07:32.385078728Z" level=info msg="StartContainer for \"46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37\"" Sep 4 00:07:32.389026 containerd[1569]: time="2025-09-04T00:07:32.388838378Z" level=info msg="connecting to shim 46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37" address="unix:///run/containerd/s/d13a9770c8d8dcf716a67347934885e24435c2486d79267b2068faa3ffac8a35" protocol=ttrpc version=3 Sep 4 00:07:32.404699 containerd[1569]: time="2025-09-04T00:07:32.404412851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6x4tt,Uid:c6430a4a-a8cf-489e-aef3-94b1de27c7d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964\"" Sep 4 00:07:32.417390 containerd[1569]: time="2025-09-04T00:07:32.417137735Z" level=info msg="CreateContainer within sandbox \"7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 00:07:32.426469 systemd[1]: Started cri-containerd-46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37.scope - libcontainer container 46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37. 
Sep 4 00:07:32.440145 containerd[1569]: time="2025-09-04T00:07:32.439763133Z" level=info msg="Container 5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:07:32.455613 containerd[1569]: time="2025-09-04T00:07:32.455542988Z" level=info msg="CreateContainer within sandbox \"7f14750d55f1dc1df7519fb24f9de948bb0b349434f9da62753452ed3fe9e964\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721\"" Sep 4 00:07:32.456618 containerd[1569]: time="2025-09-04T00:07:32.456520338Z" level=info msg="StartContainer for \"5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721\"" Sep 4 00:07:32.459677 containerd[1569]: time="2025-09-04T00:07:32.459614878Z" level=info msg="connecting to shim 5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721" address="unix:///run/containerd/s/f662ca4834121120b246c0db7c4054e6d8444588d78801f73cdbe63d83a02bfd" protocol=ttrpc version=3 Sep 4 00:07:32.493485 systemd[1]: Started cri-containerd-5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721.scope - libcontainer container 5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721. Sep 4 00:07:32.605188 containerd[1569]: time="2025-09-04T00:07:32.605100144Z" level=info msg="StartContainer for \"46b64ed8ee9020152917c665c80d512c62c1fa6ab40d7043ec84c0fef71f8a37\" returns successfully" Sep 4 00:07:32.634654 containerd[1569]: time="2025-09-04T00:07:32.632982315Z" level=info msg="StartContainer for \"5f99d68bcacacc9927b17313159eb09ab772518e1e98a1149b4a3112894ae721\" returns successfully" Sep 4 00:07:32.957195 kubelet[2774]: I0904 00:07:32.956828 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6x4tt" podStartSLOduration=29.956798155 podStartE2EDuration="29.956798155s" podCreationTimestamp="2025-09-04 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:32.955718661 +0000 UTC m=+35.555878943" watchObservedRunningTime="2025-09-04 00:07:32.956798155 +0000 UTC m=+35.556958438" Sep 4 00:07:33.008990 kubelet[2774]: I0904 00:07:33.008888 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n4kzv" podStartSLOduration=30.008857456 podStartE2EDuration="30.008857456s" podCreationTimestamp="2025-09-04 00:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:07:32.986386979 +0000 UTC m=+35.586547261" watchObservedRunningTime="2025-09-04 00:07:33.008857456 +0000 UTC m=+35.609017739" Sep 4 00:08:27.407120 systemd[1]: Started sshd@9-10.128.0.81:22-147.75.109.163:46690.service - OpenSSH per-connection server daemon (147.75.109.163:46690). Sep 4 00:08:27.719229 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 46690 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:27.721381 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:27.730085 systemd-logind[1486]: New session 10 of user core. Sep 4 00:08:27.737272 systemd[1]: Started session-10.scope - Session 10 of User core. 
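The two pod_startup_latency_tracker entries above report podStartSLOduration of roughly 29.96 s and 30.01 s for the CoreDNS pods. A minimal sketch that re-derives the first figure from the podCreationTimestamp and observedRunningTime strings copied out of that entry; the parsing format is an assumption about how these Go time.Time values render, not part of the log:

import re
from datetime import datetime

def parse_kube_time(s: str) -> datetime:
    # Drop the monotonic-clock suffix ("m=+35.55...") and the redundant "UTC" word,
    # and truncate nanoseconds to the microseconds strptime understands.
    s = s.split(" m=+")[0].replace(" UTC", "")
    s = re.sub(r"(\.\d{6})\d+", r"\1", s)
    fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in s else "%Y-%m-%d %H:%M:%S %z"
    return datetime.strptime(s, fmt)

created = parse_kube_time("2025-09-04 00:07:03 +0000 UTC")
running = parse_kube_time("2025-09-04 00:07:32.955718661 +0000 UTC m=+35.555878943")
print((running - created).total_seconds())
# ~29.956 s, agreeing with the logged podStartSLOduration to within about a millisecond
# (the creation timestamp is only recorded to whole seconds).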
Sep 4 00:08:28.066824 sshd[4091]: Connection closed by 147.75.109.163 port 46690 Sep 4 00:08:28.067751 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:28.075629 systemd[1]: sshd@9-10.128.0.81:22-147.75.109.163:46690.service: Deactivated successfully. Sep 4 00:08:28.079467 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 00:08:28.082461 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Sep 4 00:08:28.084878 systemd-logind[1486]: Removed session 10. Sep 4 00:08:33.128513 systemd[1]: Started sshd@10-10.128.0.81:22-147.75.109.163:38210.service - OpenSSH per-connection server daemon (147.75.109.163:38210). Sep 4 00:08:33.450598 sshd[4106]: Accepted publickey for core from 147.75.109.163 port 38210 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:33.452751 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:33.461260 systemd-logind[1486]: New session 11 of user core. Sep 4 00:08:33.467329 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 00:08:33.764583 sshd[4108]: Connection closed by 147.75.109.163 port 38210 Sep 4 00:08:33.765665 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:33.773423 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Sep 4 00:08:33.774530 systemd[1]: sshd@10-10.128.0.81:22-147.75.109.163:38210.service: Deactivated successfully. Sep 4 00:08:33.778464 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 00:08:33.782507 systemd-logind[1486]: Removed session 11. Sep 4 00:08:38.822146 systemd[1]: Started sshd@11-10.128.0.81:22-147.75.109.163:38218.service - OpenSSH per-connection server daemon (147.75.109.163:38218). Sep 4 00:08:39.133323 sshd[4124]: Accepted publickey for core from 147.75.109.163 port 38218 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:39.135325 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:39.142110 systemd-logind[1486]: New session 12 of user core. Sep 4 00:08:39.149343 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 00:08:39.434428 sshd[4126]: Connection closed by 147.75.109.163 port 38218 Sep 4 00:08:39.435343 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:39.441271 systemd[1]: sshd@11-10.128.0.81:22-147.75.109.163:38218.service: Deactivated successfully. Sep 4 00:08:39.445156 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 00:08:39.447058 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Sep 4 00:08:39.450143 systemd-logind[1486]: Removed session 12. Sep 4 00:08:44.491441 systemd[1]: Started sshd@12-10.128.0.81:22-147.75.109.163:35160.service - OpenSSH per-connection server daemon (147.75.109.163:35160). Sep 4 00:08:44.808659 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 35160 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:44.810632 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:44.818089 systemd-logind[1486]: New session 13 of user core. Sep 4 00:08:44.827301 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 00:08:45.108618 sshd[4141]: Connection closed by 147.75.109.163 port 35160 Sep 4 00:08:45.109668 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:45.115120 systemd[1]: sshd@12-10.128.0.81:22-147.75.109.163:35160.service: Deactivated successfully. Sep 4 00:08:45.119255 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 00:08:45.123880 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Sep 4 00:08:45.126351 systemd-logind[1486]: Removed session 13. Sep 4 00:08:50.170515 systemd[1]: Started sshd@13-10.128.0.81:22-147.75.109.163:47938.service - OpenSSH per-connection server daemon (147.75.109.163:47938). Sep 4 00:08:50.492872 sshd[4154]: Accepted publickey for core from 147.75.109.163 port 47938 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:50.495199 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:50.503789 systemd-logind[1486]: New session 14 of user core. Sep 4 00:08:50.510319 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 00:08:50.799385 sshd[4156]: Connection closed by 147.75.109.163 port 47938 Sep 4 00:08:50.801292 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:50.808424 systemd[1]: sshd@13-10.128.0.81:22-147.75.109.163:47938.service: Deactivated successfully. Sep 4 00:08:50.812468 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 00:08:50.814442 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Sep 4 00:08:50.817815 systemd-logind[1486]: Removed session 14. Sep 4 00:08:50.858237 systemd[1]: Started sshd@14-10.128.0.81:22-147.75.109.163:47946.service - OpenSSH per-connection server daemon (147.75.109.163:47946). Sep 4 00:08:51.198059 sshd[4169]: Accepted publickey for core from 147.75.109.163 port 47946 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:51.200513 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:51.209198 systemd-logind[1486]: New session 15 of user core. Sep 4 00:08:51.213273 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 00:08:51.556697 sshd[4171]: Connection closed by 147.75.109.163 port 47946 Sep 4 00:08:51.558354 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:51.571885 systemd[1]: sshd@14-10.128.0.81:22-147.75.109.163:47946.service: Deactivated successfully. Sep 4 00:08:51.577024 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 00:08:51.579073 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Sep 4 00:08:51.581917 systemd-logind[1486]: Removed session 15. Sep 4 00:08:51.615857 systemd[1]: Started sshd@15-10.128.0.81:22-147.75.109.163:47960.service - OpenSSH per-connection server daemon (147.75.109.163:47960). Sep 4 00:08:51.936134 sshd[4181]: Accepted publickey for core from 147.75.109.163 port 47960 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:51.939375 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:51.954291 systemd-logind[1486]: New session 16 of user core. Sep 4 00:08:51.960313 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 00:08:52.253479 sshd[4183]: Connection closed by 147.75.109.163 port 47960 Sep 4 00:08:52.254844 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:52.261974 systemd[1]: sshd@15-10.128.0.81:22-147.75.109.163:47960.service: Deactivated successfully. Sep 4 00:08:52.265316 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 00:08:52.267332 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Sep 4 00:08:52.270637 systemd-logind[1486]: Removed session 16. Sep 4 00:08:57.314041 systemd[1]: Started sshd@16-10.128.0.81:22-147.75.109.163:47972.service - OpenSSH per-connection server daemon (147.75.109.163:47972). Sep 4 00:08:57.634263 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 47972 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:08:57.636207 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:08:57.645760 systemd-logind[1486]: New session 17 of user core. Sep 4 00:08:57.650272 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 00:08:57.953896 sshd[4197]: Connection closed by 147.75.109.163 port 47972 Sep 4 00:08:57.954613 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Sep 4 00:08:57.959970 systemd[1]: sshd@16-10.128.0.81:22-147.75.109.163:47972.service: Deactivated successfully. Sep 4 00:08:57.962928 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 00:08:57.965703 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Sep 4 00:08:57.969493 systemd-logind[1486]: Removed session 17. Sep 4 00:09:03.010086 systemd[1]: Started sshd@17-10.128.0.81:22-147.75.109.163:33602.service - OpenSSH per-connection server daemon (147.75.109.163:33602). Sep 4 00:09:03.319083 sshd[4212]: Accepted publickey for core from 147.75.109.163 port 33602 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:03.320969 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:03.328867 systemd-logind[1486]: New session 18 of user core. Sep 4 00:09:03.333518 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 00:09:03.639990 sshd[4214]: Connection closed by 147.75.109.163 port 33602 Sep 4 00:09:03.641225 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:03.648496 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Sep 4 00:09:03.649850 systemd[1]: sshd@17-10.128.0.81:22-147.75.109.163:33602.service: Deactivated successfully. Sep 4 00:09:03.653343 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 00:09:03.657033 systemd-logind[1486]: Removed session 18. Sep 4 00:09:03.704467 systemd[1]: Started sshd@18-10.128.0.81:22-147.75.109.163:33614.service - OpenSSH per-connection server daemon (147.75.109.163:33614). Sep 4 00:09:04.021747 sshd[4226]: Accepted publickey for core from 147.75.109.163 port 33614 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:04.024583 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:04.033094 systemd-logind[1486]: New session 19 of user core. Sep 4 00:09:04.042434 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 4 00:09:04.395463 sshd[4228]: Connection closed by 147.75.109.163 port 33614 Sep 4 00:09:04.397461 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:04.404707 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Sep 4 00:09:04.406271 systemd[1]: sshd@18-10.128.0.81:22-147.75.109.163:33614.service: Deactivated successfully. Sep 4 00:09:04.409770 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 00:09:04.413687 systemd-logind[1486]: Removed session 19. Sep 4 00:09:04.465522 systemd[1]: Started sshd@19-10.128.0.81:22-147.75.109.163:33620.service - OpenSSH per-connection server daemon (147.75.109.163:33620). Sep 4 00:09:04.782116 sshd[4240]: Accepted publickey for core from 147.75.109.163 port 33620 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:04.784585 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:04.794020 systemd-logind[1486]: New session 20 of user core. Sep 4 00:09:04.801473 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 00:09:05.711257 sshd[4242]: Connection closed by 147.75.109.163 port 33620 Sep 4 00:09:05.713781 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:05.725349 systemd[1]: sshd@19-10.128.0.81:22-147.75.109.163:33620.service: Deactivated successfully. Sep 4 00:09:05.731514 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 00:09:05.737285 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Sep 4 00:09:05.739960 systemd-logind[1486]: Removed session 20. Sep 4 00:09:05.772079 systemd[1]: Started sshd@20-10.128.0.81:22-147.75.109.163:33630.service - OpenSSH per-connection server daemon (147.75.109.163:33630). Sep 4 00:09:06.002291 systemd[1]: Started sshd@21-10.128.0.81:22-172.236.228.222:27634.service - OpenSSH per-connection server daemon (172.236.228.222:27634). Sep 4 00:09:06.093357 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 33630 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:06.095098 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:06.104363 systemd-logind[1486]: New session 21 of user core. Sep 4 00:09:06.108281 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 00:09:06.527108 sshd[4263]: Connection closed by 147.75.109.163 port 33630 Sep 4 00:09:06.528116 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:06.534338 systemd[1]: sshd@20-10.128.0.81:22-147.75.109.163:33630.service: Deactivated successfully. Sep 4 00:09:06.538111 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 00:09:06.539447 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Sep 4 00:09:06.542100 systemd-logind[1486]: Removed session 21. Sep 4 00:09:06.583812 systemd[1]: Started sshd@22-10.128.0.81:22-147.75.109.163:33634.service - OpenSSH per-connection server daemon (147.75.109.163:33634). Sep 4 00:09:06.907325 sshd[4274]: Accepted publickey for core from 147.75.109.163 port 33634 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:06.908846 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:06.916545 systemd-logind[1486]: New session 22 of user core. Sep 4 00:09:06.924284 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 00:09:07.018771 sshd[4262]: Connection closed by 172.236.228.222 port 27634 [preauth] Sep 4 00:09:07.021799 systemd[1]: sshd@21-10.128.0.81:22-172.236.228.222:27634.service: Deactivated successfully. Sep 4 00:09:07.093770 systemd[1]: Started sshd@23-10.128.0.81:22-172.236.228.222:27638.service - OpenSSH per-connection server daemon (172.236.228.222:27638). Sep 4 00:09:07.223420 sshd[4276]: Connection closed by 147.75.109.163 port 33634 Sep 4 00:09:07.224927 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:07.230092 systemd[1]: sshd@22-10.128.0.81:22-147.75.109.163:33634.service: Deactivated successfully. Sep 4 00:09:07.232933 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 00:09:07.236996 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Sep 4 00:09:07.238946 systemd-logind[1486]: Removed session 22. Sep 4 00:09:08.140258 sshd[4286]: Connection closed by 172.236.228.222 port 27638 [preauth] Sep 4 00:09:08.141916 systemd[1]: sshd@23-10.128.0.81:22-172.236.228.222:27638.service: Deactivated successfully. Sep 4 00:09:08.212485 systemd[1]: Started sshd@24-10.128.0.81:22-172.236.228.222:27646.service - OpenSSH per-connection server daemon (172.236.228.222:27646). Sep 4 00:09:09.237322 sshd[4295]: Connection closed by 172.236.228.222 port 27646 [preauth] Sep 4 00:09:09.240340 systemd[1]: sshd@24-10.128.0.81:22-172.236.228.222:27646.service: Deactivated successfully. Sep 4 00:09:12.279184 systemd[1]: Started sshd@25-10.128.0.81:22-147.75.109.163:57242.service - OpenSSH per-connection server daemon (147.75.109.163:57242). Sep 4 00:09:12.598986 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 57242 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:12.600971 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:12.609164 systemd-logind[1486]: New session 23 of user core. Sep 4 00:09:12.613225 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 00:09:12.898443 sshd[4304]: Connection closed by 147.75.109.163 port 57242 Sep 4 00:09:12.899943 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:12.906264 systemd[1]: sshd@25-10.128.0.81:22-147.75.109.163:57242.service: Deactivated successfully. Sep 4 00:09:12.909547 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 00:09:12.912493 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Sep 4 00:09:12.915578 systemd-logind[1486]: Removed session 23. Sep 4 00:09:17.959320 systemd[1]: Started sshd@26-10.128.0.81:22-147.75.109.163:57250.service - OpenSSH per-connection server daemon (147.75.109.163:57250). Sep 4 00:09:18.273823 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 57250 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:18.276047 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:18.284430 systemd-logind[1486]: New session 24 of user core. Sep 4 00:09:18.296330 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 00:09:18.573385 sshd[4318]: Connection closed by 147.75.109.163 port 57250 Sep 4 00:09:18.573825 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:18.581529 systemd[1]: sshd@26-10.128.0.81:22-147.75.109.163:57250.service: Deactivated successfully. Sep 4 00:09:18.585478 systemd[1]: session-24.scope: Deactivated successfully. 
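The sshd/systemd entries above repeat one lifecycle per session: the per-connection service starts, a publickey is accepted, pam opens the session, the connection closes, the scope is deactivated, and the session is removed. The connections from 172.236.228.222, by contrast, close in the preauth phase without ever presenting an accepted key. A minimal sketch, again assuming the journal text is in a hypothetical journal.txt, that separates completed logins from preauth-only connections:

import re
from collections import Counter

accepted = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
preauth  = re.compile(r"Connection closed by (\S+) port (\d+) \[preauth\]")

with open("journal.txt") as fh:             # hypothetical capture of the log text
    text = fh.read()

sessions, probes = Counter(), Counter()
for user, addr, port in accepted.findall(text):
    sessions[(user, addr)] += 1             # e.g. ('core', '147.75.109.163')
for addr, port in preauth.findall(text):
    probes[addr] += 1                       # e.g. '172.236.228.222' closing preauth

print("completed logins:", dict(sessions))
print("preauth-only connections:", dict(probes))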
Sep 4 00:09:18.587257 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Sep 4 00:09:18.590468 systemd-logind[1486]: Removed session 24. Sep 4 00:09:23.629830 systemd[1]: Started sshd@27-10.128.0.81:22-147.75.109.163:58778.service - OpenSSH per-connection server daemon (147.75.109.163:58778). Sep 4 00:09:23.948939 sshd[4332]: Accepted publickey for core from 147.75.109.163 port 58778 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:23.950930 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:23.961549 systemd-logind[1486]: New session 25 of user core. Sep 4 00:09:23.968278 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 00:09:24.247331 sshd[4335]: Connection closed by 147.75.109.163 port 58778 Sep 4 00:09:24.248401 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:24.256895 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Sep 4 00:09:24.257177 systemd[1]: sshd@27-10.128.0.81:22-147.75.109.163:58778.service: Deactivated successfully. Sep 4 00:09:24.261674 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 00:09:24.264940 systemd-logind[1486]: Removed session 25. Sep 4 00:09:24.308770 systemd[1]: Started sshd@28-10.128.0.81:22-147.75.109.163:58788.service - OpenSSH per-connection server daemon (147.75.109.163:58788). Sep 4 00:09:24.618434 sshd[4347]: Accepted publickey for core from 147.75.109.163 port 58788 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:24.620371 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:24.629132 systemd-logind[1486]: New session 26 of user core. Sep 4 00:09:24.637316 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 00:09:26.470706 containerd[1569]: time="2025-09-04T00:09:26.470319568Z" level=info msg="StopContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" with timeout 30 (s)" Sep 4 00:09:26.475387 containerd[1569]: time="2025-09-04T00:09:26.475270039Z" level=info msg="Stop container \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" with signal terminated" Sep 4 00:09:26.499915 containerd[1569]: time="2025-09-04T00:09:26.499855040Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 00:09:26.501179 systemd[1]: cri-containerd-e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db.scope: Deactivated successfully. 
Sep 4 00:09:26.507560 containerd[1569]: time="2025-09-04T00:09:26.507495513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" id:\"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" pid:3317 exited_at:{seconds:1756944566 nanos:506645468}" Sep 4 00:09:26.507718 containerd[1569]: time="2025-09-04T00:09:26.507586579Z" level=info msg="received exit event container_id:\"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" id:\"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" pid:3317 exited_at:{seconds:1756944566 nanos:506645468}" Sep 4 00:09:26.515243 containerd[1569]: time="2025-09-04T00:09:26.515181320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" id:\"363d8a0539bdd21ddef7118e0eb0b0304cd56441254f7120dbf6c16404a2c041\" pid:4368 exited_at:{seconds:1756944566 nanos:514641804}" Sep 4 00:09:26.519845 containerd[1569]: time="2025-09-04T00:09:26.519767709Z" level=info msg="StopContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" with timeout 2 (s)" Sep 4 00:09:26.521232 containerd[1569]: time="2025-09-04T00:09:26.521194131Z" level=info msg="Stop container \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" with signal terminated" Sep 4 00:09:26.538902 systemd-networkd[1440]: lxc_health: Link DOWN Sep 4 00:09:26.538915 systemd-networkd[1440]: lxc_health: Lost carrier Sep 4 00:09:26.562767 systemd[1]: cri-containerd-6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba.scope: Deactivated successfully. Sep 4 00:09:26.563748 systemd[1]: cri-containerd-6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba.scope: Consumed 10.259s CPU time, 125.2M memory peak, 144K read from disk, 13.3M written to disk. Sep 4 00:09:26.567470 containerd[1569]: time="2025-09-04T00:09:26.567392493Z" level=info msg="received exit event container_id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" pid:3411 exited_at:{seconds:1756944566 nanos:565328817}" Sep 4 00:09:26.568851 containerd[1569]: time="2025-09-04T00:09:26.568555551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" id:\"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" pid:3411 exited_at:{seconds:1756944566 nanos:565328817}" Sep 4 00:09:26.583295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db-rootfs.mount: Deactivated successfully. Sep 4 00:09:26.620847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba-rootfs.mount: Deactivated successfully. 
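The TaskExit and "received exit event" entries above carry the exit time as {seconds, nanos} since the Unix epoch rather than as a formatted timestamp. A minimal sketch converting the pair copied from the 6011b0cd47... exit event back to UTC, which lines up with the surrounding journal timestamps to within a couple of milliseconds:

from datetime import datetime, timezone

# exited_at value copied from the TaskExit event for container 6011b0cd47... above.
seconds, nanos = 1756944566, 565328817
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())   # 2025-09-04T00:09:26.565329+00:00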
Sep 4 00:09:26.628505 containerd[1569]: time="2025-09-04T00:09:26.628428154Z" level=info msg="StopContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" returns successfully" Sep 4 00:09:26.631169 containerd[1569]: time="2025-09-04T00:09:26.631123777Z" level=info msg="StopPodSandbox for \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\"" Sep 4 00:09:26.631425 containerd[1569]: time="2025-09-04T00:09:26.631216280Z" level=info msg="Container to stop \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.631425 containerd[1569]: time="2025-09-04T00:09:26.631236211Z" level=info msg="Container to stop \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.631425 containerd[1569]: time="2025-09-04T00:09:26.631255061Z" level=info msg="Container to stop \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.631425 containerd[1569]: time="2025-09-04T00:09:26.631268737Z" level=info msg="Container to stop \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.631425 containerd[1569]: time="2025-09-04T00:09:26.631286344Z" level=info msg="Container to stop \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.634139 containerd[1569]: time="2025-09-04T00:09:26.633993590Z" level=info msg="StopContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" returns successfully" Sep 4 00:09:26.635311 containerd[1569]: time="2025-09-04T00:09:26.635250246Z" level=info msg="StopPodSandbox for \"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\"" Sep 4 00:09:26.635446 containerd[1569]: time="2025-09-04T00:09:26.635340780Z" level=info msg="Container to stop \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 00:09:26.655686 systemd[1]: cri-containerd-2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e.scope: Deactivated successfully. Sep 4 00:09:26.658491 systemd[1]: cri-containerd-414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669.scope: Deactivated successfully. Sep 4 00:09:26.665776 containerd[1569]: time="2025-09-04T00:09:26.665728441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" id:\"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" pid:3005 exit_status:137 exited_at:{seconds:1756944566 nanos:664779347}" Sep 4 00:09:26.717311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669-rootfs.mount: Deactivated successfully. 
Sep 4 00:09:26.726116 containerd[1569]: time="2025-09-04T00:09:26.725885617Z" level=info msg="shim disconnected" id=414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669 namespace=k8s.io Sep 4 00:09:26.726116 containerd[1569]: time="2025-09-04T00:09:26.725928847Z" level=warning msg="cleaning up after shim disconnected" id=414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669 namespace=k8s.io Sep 4 00:09:26.726116 containerd[1569]: time="2025-09-04T00:09:26.725943513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:09:26.742728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e-rootfs.mount: Deactivated successfully. Sep 4 00:09:26.747295 containerd[1569]: time="2025-09-04T00:09:26.746987545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" id:\"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" pid:2922 exit_status:137 exited_at:{seconds:1756944566 nanos:670387052}" Sep 4 00:09:26.752287 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e-shm.mount: Deactivated successfully. Sep 4 00:09:26.755348 containerd[1569]: time="2025-09-04T00:09:26.747919085Z" level=info msg="received exit event sandbox_id:\"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" exit_status:137 exited_at:{seconds:1756944566 nanos:664779347}" Sep 4 00:09:26.755348 containerd[1569]: time="2025-09-04T00:09:26.755299442Z" level=info msg="received exit event sandbox_id:\"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" exit_status:137 exited_at:{seconds:1756944566 nanos:670387052}" Sep 4 00:09:26.756637 containerd[1569]: time="2025-09-04T00:09:26.756567766Z" level=info msg="TearDown network for sandbox \"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" successfully" Sep 4 00:09:26.756762 containerd[1569]: time="2025-09-04T00:09:26.756610032Z" level=info msg="StopPodSandbox for \"2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e\" returns successfully" Sep 4 00:09:26.757666 containerd[1569]: time="2025-09-04T00:09:26.747128957Z" level=info msg="shim disconnected" id=2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e namespace=k8s.io Sep 4 00:09:26.757666 containerd[1569]: time="2025-09-04T00:09:26.757259912Z" level=warning msg="cleaning up after shim disconnected" id=2587fbae726e127454fd68fa1428adbd3606276fd1d77dcdce1d47012c67bf2e namespace=k8s.io Sep 4 00:09:26.757666 containerd[1569]: time="2025-09-04T00:09:26.757272175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 00:09:26.764105 containerd[1569]: time="2025-09-04T00:09:26.764060875Z" level=info msg="TearDown network for sandbox \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" successfully" Sep 4 00:09:26.764778 containerd[1569]: time="2025-09-04T00:09:26.764380541Z" level=info msg="StopPodSandbox for \"414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669\" returns successfully" Sep 4 00:09:26.907045 kubelet[2774]: I0904 00:09:26.906926 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20f07355-29b6-4076-83e0-c543cdd328b4-clustermesh-secrets\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909274 
kubelet[2774]: I0904 00:09:26.906999 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-config-path\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909274 kubelet[2774]: I0904 00:09:26.908075 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-kernel\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909274 kubelet[2774]: I0904 00:09:26.908117 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acfc7f93-aa2e-4886-ba84-59e875a7a960-cilium-config-path\") pod \"acfc7f93-aa2e-4886-ba84-59e875a7a960\" (UID: \"acfc7f93-aa2e-4886-ba84-59e875a7a960\") " Sep 4 00:09:26.909274 kubelet[2774]: I0904 00:09:26.908146 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-cgroup\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909274 kubelet[2774]: I0904 00:09:26.908176 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cni-path\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909274 kubelet[2774]: I0904 00:09:26.908210 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8fsb\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-kube-api-access-b8fsb\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908254 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-net\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908357 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-lib-modules\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908388 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-run\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908413 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-hostproc\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908439 2774 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-bpf-maps\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909625 kubelet[2774]: I0904 00:09:26.908470 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-etc-cni-netd\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909980 kubelet[2774]: I0904 00:09:26.908499 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-xtables-lock\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909980 kubelet[2774]: I0904 00:09:26.908531 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-hubble-tls\") pod \"20f07355-29b6-4076-83e0-c543cdd328b4\" (UID: \"20f07355-29b6-4076-83e0-c543cdd328b4\") " Sep 4 00:09:26.909980 kubelet[2774]: I0904 00:09:26.908564 2774 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrm79\" (UniqueName: \"kubernetes.io/projected/acfc7f93-aa2e-4886-ba84-59e875a7a960-kube-api-access-xrm79\") pod \"acfc7f93-aa2e-4886-ba84-59e875a7a960\" (UID: \"acfc7f93-aa2e-4886-ba84-59e875a7a960\") " Sep 4 00:09:26.910390 kubelet[2774]: I0904 00:09:26.910331 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.911203 kubelet[2774]: I0904 00:09:26.911134 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912432 kubelet[2774]: I0904 00:09:26.911687 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912432 kubelet[2774]: I0904 00:09:26.911716 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912432 kubelet[2774]: I0904 00:09:26.911734 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912432 kubelet[2774]: I0904 00:09:26.911786 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912912 kubelet[2774]: I0904 00:09:26.911806 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.912912 kubelet[2774]: I0904 00:09:26.911949 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.913346 kubelet[2774]: I0904 00:09:26.913294 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.913531 kubelet[2774]: I0904 00:09:26.913498 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 00:09:26.919597 kubelet[2774]: I0904 00:09:26.919406 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f07355-29b6-4076-83e0-c543cdd328b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:09:26.919597 kubelet[2774]: I0904 00:09:26.919545 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfc7f93-aa2e-4886-ba84-59e875a7a960-kube-api-access-xrm79" (OuterVolumeSpecName: "kube-api-access-xrm79") pod "acfc7f93-aa2e-4886-ba84-59e875a7a960" (UID: "acfc7f93-aa2e-4886-ba84-59e875a7a960"). InnerVolumeSpecName "kube-api-access-xrm79". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:09:26.920277 kubelet[2774]: I0904 00:09:26.920221 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acfc7f93-aa2e-4886-ba84-59e875a7a960-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acfc7f93-aa2e-4886-ba84-59e875a7a960" (UID: "acfc7f93-aa2e-4886-ba84-59e875a7a960"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:09:26.921906 kubelet[2774]: I0904 00:09:26.921869 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:09:26.923438 kubelet[2774]: I0904 00:09:26.923376 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:09:26.924447 kubelet[2774]: I0904 00:09:26.924408 2774 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-kube-api-access-b8fsb" (OuterVolumeSpecName: "kube-api-access-b8fsb") pod "20f07355-29b6-4076-83e0-c543cdd328b4" (UID: "20f07355-29b6-4076-83e0-c543cdd328b4"). InnerVolumeSpecName "kube-api-access-b8fsb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009283 2774 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-lib-modules\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009351 2774 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-bpf-maps\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009368 2774 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-run\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009391 2774 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-hostproc\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009431 2774 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-xtables-lock\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009448 2774 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-hubble-tls\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.009502 kubelet[2774]: I0904 00:09:27.009474 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrm79\" (UniqueName: \"kubernetes.io/projected/acfc7f93-aa2e-4886-ba84-59e875a7a960-kube-api-access-xrm79\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009495 2774 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-etc-cni-netd\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009512 2774 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20f07355-29b6-4076-83e0-c543cdd328b4-clustermesh-secrets\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009528 2774 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-config-path\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009550 2774 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-kernel\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 
00:09:27.009571 2774 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acfc7f93-aa2e-4886-ba84-59e875a7a960-cilium-config-path\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009590 2774 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cilium-cgroup\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010072 kubelet[2774]: I0904 00:09:27.009609 2774 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-cni-path\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010440 kubelet[2774]: I0904 00:09:27.009624 2774 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b8fsb\" (UniqueName: \"kubernetes.io/projected/20f07355-29b6-4076-83e0-c543cdd328b4-kube-api-access-b8fsb\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.010440 kubelet[2774]: I0904 00:09:27.009641 2774 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20f07355-29b6-4076-83e0-c543cdd328b4-host-proc-sys-net\") on node \"ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532\" DevicePath \"\"" Sep 4 00:09:27.271837 kubelet[2774]: I0904 00:09:27.271242 2774 scope.go:117] "RemoveContainer" containerID="e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db" Sep 4 00:09:27.282738 containerd[1569]: time="2025-09-04T00:09:27.280711601Z" level=info msg="RemoveContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\"" Sep 4 00:09:27.289358 systemd[1]: Removed slice kubepods-besteffort-podacfc7f93_aa2e_4886_ba84_59e875a7a960.slice - libcontainer container kubepods-besteffort-podacfc7f93_aa2e_4886_ba84_59e875a7a960.slice. 
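For every volume of the two deleted pods, the kubelet entries above follow the same sequence: "UnmountVolume started", then "UnmountVolume.TearDown succeeded", and finally "Volume detached". A minimal sketch, using the same hypothetical journal.txt, that cross-checks that every volume which started unmounting was eventually reported detached (the optional backslashes in the patterns account for the escaped quotes as they appear in the journal text):

import re

started  = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)\\?"')
detached = re.compile(r'Volume detached for volume \\?"([^"\\]+)\\?"')

with open("journal.txt") as fh:          # hypothetical capture of the log text
    text = fh.read()

started_set  = set(started.findall(text))
detached_set = set(detached.findall(text))
print("still pending:", started_set - detached_set)   # expected: empty set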
Sep 4 00:09:27.292034 containerd[1569]: time="2025-09-04T00:09:27.291518217Z" level=info msg="RemoveContainer for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" returns successfully" Sep 4 00:09:27.292859 kubelet[2774]: I0904 00:09:27.292823 2774 scope.go:117] "RemoveContainer" containerID="e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db" Sep 4 00:09:27.293778 containerd[1569]: time="2025-09-04T00:09:27.293672793Z" level=error msg="ContainerStatus for \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\": not found" Sep 4 00:09:27.295405 kubelet[2774]: E0904 00:09:27.295032 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\": not found" containerID="e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db" Sep 4 00:09:27.295405 kubelet[2774]: I0904 00:09:27.295146 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db"} err="failed to get container status \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2fa39d6ae0bcd96e3ecc836d5811952ec77ef0ef7578d896b0708686a5970db\": not found" Sep 4 00:09:27.302142 kubelet[2774]: I0904 00:09:27.302072 2774 scope.go:117] "RemoveContainer" containerID="6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba" Sep 4 00:09:27.317853 containerd[1569]: time="2025-09-04T00:09:27.317371874Z" level=info msg="RemoveContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\"" Sep 4 00:09:27.326636 systemd[1]: Removed slice kubepods-burstable-pod20f07355_29b6_4076_83e0_c543cdd328b4.slice - libcontainer container kubepods-burstable-pod20f07355_29b6_4076_83e0_c543cdd328b4.slice. Sep 4 00:09:27.326845 systemd[1]: kubepods-burstable-pod20f07355_29b6_4076_83e0_c543cdd328b4.slice: Consumed 10.432s CPU time, 125.6M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 4 00:09:27.333585 containerd[1569]: time="2025-09-04T00:09:27.333462369Z" level=info msg="RemoveContainer for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" returns successfully" Sep 4 00:09:27.334324 kubelet[2774]: I0904 00:09:27.334266 2774 scope.go:117] "RemoveContainer" containerID="abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8" Sep 4 00:09:27.342227 containerd[1569]: time="2025-09-04T00:09:27.341292372Z" level=info msg="RemoveContainer for \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\"" Sep 4 00:09:27.355508 containerd[1569]: time="2025-09-04T00:09:27.355416919Z" level=info msg="RemoveContainer for \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" returns successfully" Sep 4 00:09:27.357079 kubelet[2774]: I0904 00:09:27.356945 2774 scope.go:117] "RemoveContainer" containerID="5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4" Sep 4 00:09:27.363777 containerd[1569]: time="2025-09-04T00:09:27.363707181Z" level=info msg="RemoveContainer for \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\"" Sep 4 00:09:27.370279 containerd[1569]: time="2025-09-04T00:09:27.370177816Z" level=info msg="RemoveContainer for \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" returns successfully" Sep 4 00:09:27.370790 kubelet[2774]: I0904 00:09:27.370576 2774 scope.go:117] "RemoveContainer" containerID="7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00" Sep 4 00:09:27.373606 containerd[1569]: time="2025-09-04T00:09:27.373557646Z" level=info msg="RemoveContainer for \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\"" Sep 4 00:09:27.378472 containerd[1569]: time="2025-09-04T00:09:27.378359991Z" level=info msg="RemoveContainer for \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" returns successfully" Sep 4 00:09:27.378899 kubelet[2774]: I0904 00:09:27.378850 2774 scope.go:117] "RemoveContainer" containerID="d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984" Sep 4 00:09:27.383431 containerd[1569]: time="2025-09-04T00:09:27.383346255Z" level=info msg="RemoveContainer for \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\"" Sep 4 00:09:27.389109 containerd[1569]: time="2025-09-04T00:09:27.388949416Z" level=info msg="RemoveContainer for \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" returns successfully" Sep 4 00:09:27.389437 kubelet[2774]: I0904 00:09:27.389383 2774 scope.go:117] "RemoveContainer" containerID="6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba" Sep 4 00:09:27.389896 containerd[1569]: time="2025-09-04T00:09:27.389832426Z" level=error msg="ContainerStatus for \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\": not found" Sep 4 00:09:27.390181 kubelet[2774]: E0904 00:09:27.390131 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\": not found" containerID="6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba" Sep 4 00:09:27.390284 kubelet[2774]: I0904 00:09:27.390187 2774 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba"} err="failed to get container status \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba\": not found" Sep 4 00:09:27.390284 kubelet[2774]: I0904 00:09:27.390229 2774 scope.go:117] "RemoveContainer" containerID="abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8" Sep 4 00:09:27.390919 containerd[1569]: time="2025-09-04T00:09:27.390695896Z" level=error msg="ContainerStatus for \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\": not found" Sep 4 00:09:27.391210 kubelet[2774]: E0904 00:09:27.391164 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\": not found" containerID="abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8" Sep 4 00:09:27.391320 kubelet[2774]: I0904 00:09:27.391247 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8"} err="failed to get container status \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"abe855b62059ccc688addb6cb34328557e434e82e91950cd10fc3e9a1be361a8\": not found" Sep 4 00:09:27.391320 kubelet[2774]: I0904 00:09:27.391284 2774 scope.go:117] "RemoveContainer" containerID="5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4" Sep 4 00:09:27.392065 containerd[1569]: time="2025-09-04T00:09:27.391979911Z" level=error msg="ContainerStatus for \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\": not found" Sep 4 00:09:27.392427 kubelet[2774]: E0904 00:09:27.392350 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\": not found" containerID="5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4" Sep 4 00:09:27.392517 kubelet[2774]: I0904 00:09:27.392420 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4"} err="failed to get container status \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5179b76187fba35fd693586c7a43ed15241bd2ed48b827cff647d2c01fe079f4\": not found" Sep 4 00:09:27.392517 kubelet[2774]: I0904 00:09:27.392490 2774 scope.go:117] "RemoveContainer" containerID="7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00" Sep 4 00:09:27.393044 containerd[1569]: time="2025-09-04T00:09:27.392947352Z" level=error msg="ContainerStatus for \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\": not found" Sep 4 00:09:27.393428 kubelet[2774]: E0904 00:09:27.393325 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\": not found" containerID="7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00" Sep 4 00:09:27.393539 kubelet[2774]: I0904 00:09:27.393425 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00"} err="failed to get container status \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c62c6a7871c8e1bee20696dfb131f5bc30af8e31358a851cc798371d1895c00\": not found" Sep 4 00:09:27.393539 kubelet[2774]: I0904 00:09:27.393460 2774 scope.go:117] "RemoveContainer" containerID="d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984" Sep 4 00:09:27.393935 containerd[1569]: time="2025-09-04T00:09:27.393866567Z" level=error msg="ContainerStatus for \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\": not found" Sep 4 00:09:27.394486 kubelet[2774]: E0904 00:09:27.394432 2774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\": not found" containerID="d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984" Sep 4 00:09:27.394595 kubelet[2774]: I0904 00:09:27.394481 2774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984"} err="failed to get container status \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3387bf23cb2f97d2d73da8b083ba427e8053bb55b9b85670dda29014941d984\": not found" Sep 4 00:09:27.581228 systemd[1]: var-lib-kubelet-pods-acfc7f93\x2daa2e\x2d4886\x2dba84\x2d59e875a7a960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxrm79.mount: Deactivated successfully. Sep 4 00:09:27.581538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-414b7e1a3ac7ae2197b60aec497e1f0e7815bc8908100f4e403f62de93f20669-shm.mount: Deactivated successfully. Sep 4 00:09:27.581665 systemd[1]: var-lib-kubelet-pods-20f07355\x2d29b6\x2d4076\x2d83e0\x2dc543cdd328b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8fsb.mount: Deactivated successfully. Sep 4 00:09:27.581791 systemd[1]: var-lib-kubelet-pods-20f07355\x2d29b6\x2d4076\x2d83e0\x2dc543cdd328b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 00:09:27.581896 systemd[1]: var-lib-kubelet-pods-20f07355\x2d29b6\x2d4076\x2d83e0\x2dc543cdd328b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
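The RemoveContainer/ContainerStatus pairs above follow the expected idempotent-delete pattern: once a container has been removed, the follow-up ContainerStatus call fails with gRPC code NotFound and the kubelet logs a DeleteContainer error rather than retrying. A minimal illustrative sketch, not part of the captured log, of how such an error is produced and recognized; it assumes only the standard google.golang.org/grpc status and codes packages, and containerStatus is a stand-in for the real CRI call:

```go
// Sketch only: mirrors the NotFound pattern in the ContainerStatus entries above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for a CRI ContainerStatus call against a
// container that has already been removed.
func containerStatus(id string) error {
	return status.Errorf(codes.NotFound, "an error occurred when try to find container %q: not found", id)
}

func main() {
	id := "6011b0cd4727d7e1354fd879f3c1ae1232592769513c0ab2ea425488d8bb04ba"
	if err := containerStatus(id); status.Code(err) == codes.NotFound {
		// NotFound after a successful RemoveContainer means the container is
		// already gone, so the failed lookup is benign.
		fmt.Printf("container %s already removed\n", id[:12])
	}
}
```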
Sep 4 00:09:27.693352 kubelet[2774]: I0904 00:09:27.693278 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f07355-29b6-4076-83e0-c543cdd328b4" path="/var/lib/kubelet/pods/20f07355-29b6-4076-83e0-c543cdd328b4/volumes" Sep 4 00:09:27.694283 kubelet[2774]: I0904 00:09:27.694243 2774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfc7f93-aa2e-4886-ba84-59e875a7a960" path="/var/lib/kubelet/pods/acfc7f93-aa2e-4886-ba84-59e875a7a960/volumes" Sep 4 00:09:27.863993 kubelet[2774]: E0904 00:09:27.863922 2774 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 00:09:28.430510 sshd[4349]: Connection closed by 147.75.109.163 port 58788 Sep 4 00:09:28.431567 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:28.437497 systemd[1]: sshd@28-10.128.0.81:22-147.75.109.163:58788.service: Deactivated successfully. Sep 4 00:09:28.440804 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 00:09:28.441341 systemd[1]: session-26.scope: Consumed 1.047s CPU time, 26M memory peak. Sep 4 00:09:28.443777 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Sep 4 00:09:28.446863 systemd-logind[1486]: Removed session 26. Sep 4 00:09:28.487038 systemd[1]: Started sshd@29-10.128.0.81:22-147.75.109.163:58794.service - OpenSSH per-connection server daemon (147.75.109.163:58794). Sep 4 00:09:28.716674 ntpd[1480]: Deleting interface #11 lxc_health, fe80::b450:f1ff:fe8e:4e7a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=118 secs Sep 4 00:09:28.717329 ntpd[1480]: 4 Sep 00:09:28 ntpd[1480]: Deleting interface #11 lxc_health, fe80::b450:f1ff:fe8e:4e7a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=118 secs Sep 4 00:09:28.801661 sshd[4499]: Accepted publickey for core from 147.75.109.163 port 58794 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:28.803701 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:28.810222 systemd-logind[1486]: New session 27 of user core. Sep 4 00:09:28.829409 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 00:09:30.281033 kubelet[2774]: I0904 00:09:30.280763 2774 memory_manager.go:355] "RemoveStaleState removing state" podUID="20f07355-29b6-4076-83e0-c543cdd328b4" containerName="cilium-agent" Sep 4 00:09:30.281033 kubelet[2774]: I0904 00:09:30.280811 2774 memory_manager.go:355] "RemoveStaleState removing state" podUID="acfc7f93-aa2e-4886-ba84-59e875a7a960" containerName="cilium-operator" Sep 4 00:09:30.295039 sshd[4502]: Connection closed by 147.75.109.163 port 58794 Sep 4 00:09:30.297971 systemd[1]: Created slice kubepods-burstable-podf934c75b_5027_4c86_8e5f_284d5805a0f6.slice - libcontainer container kubepods-burstable-podf934c75b_5027_4c86_8e5f_284d5805a0f6.slice. Sep 4 00:09:30.298307 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:30.321752 systemd[1]: sshd@29-10.128.0.81:22-147.75.109.163:58794.service: Deactivated successfully. Sep 4 00:09:30.327733 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 00:09:30.328893 systemd[1]: session-27.scope: Consumed 1.239s CPU time, 23.7M memory peak. Sep 4 00:09:30.332563 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit. 
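The kubelet_volumes entries above record the per-pod layout being cleaned up: volume state lives under /var/lib/kubelet/pods/<podUID>/volumes. A short illustrative sketch, not from the log, that rebuilds the path reported for one of the orphaned pods; podVolumesDir is a hypothetical helper:

```go
// Sketch: reconstructs the path from the "Cleaned up orphaned pod volumes dir"
// entries above.
package main

import (
	"fmt"
	"path/filepath"
)

// podVolumesDir mirrors the layout seen in the log:
// <kubelet root>/pods/<podUID>/volumes.
func podVolumesDir(kubeletRoot, podUID string) string {
	return filepath.Join(kubeletRoot, "pods", podUID, "volumes")
}

func main() {
	fmt.Println(podVolumesDir("/var/lib/kubelet", "20f07355-29b6-4076-83e0-c543cdd328b4"))
	// Output: /var/lib/kubelet/pods/20f07355-29b6-4076-83e0-c543cdd328b4/volumes
}
```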
Sep 4 00:09:30.359609 systemd[1]: Started sshd@30-10.128.0.81:22-147.75.109.163:48172.service - OpenSSH per-connection server daemon (147.75.109.163:48172). Sep 4 00:09:30.361755 systemd-logind[1486]: Removed session 27. Sep 4 00:09:30.434073 kubelet[2774]: I0904 00:09:30.433921 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-xtables-lock\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434347 kubelet[2774]: I0904 00:09:30.434148 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f934c75b-5027-4c86-8e5f-284d5805a0f6-cilium-config-path\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434347 kubelet[2774]: I0904 00:09:30.434279 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-host-proc-sys-kernel\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434347 kubelet[2774]: I0904 00:09:30.434321 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-hostproc\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434352 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bxv\" (UniqueName: \"kubernetes.io/projected/f934c75b-5027-4c86-8e5f-284d5805a0f6-kube-api-access-n5bxv\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434387 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-cilium-run\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434417 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-host-proc-sys-net\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434453 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-cilium-cgroup\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434487 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-lib-modules\") pod \"cilium-dxxhn\" (UID: 
\"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434606 kubelet[2774]: I0904 00:09:30.434520 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f934c75b-5027-4c86-8e5f-284d5805a0f6-clustermesh-secrets\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434993 kubelet[2774]: I0904 00:09:30.434561 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-cni-path\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434993 kubelet[2774]: I0904 00:09:30.434593 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-etc-cni-netd\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434993 kubelet[2774]: I0904 00:09:30.434631 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f934c75b-5027-4c86-8e5f-284d5805a0f6-bpf-maps\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434993 kubelet[2774]: I0904 00:09:30.434666 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f934c75b-5027-4c86-8e5f-284d5805a0f6-cilium-ipsec-secrets\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.434993 kubelet[2774]: I0904 00:09:30.434699 2774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f934c75b-5027-4c86-8e5f-284d5805a0f6-hubble-tls\") pod \"cilium-dxxhn\" (UID: \"f934c75b-5027-4c86-8e5f-284d5805a0f6\") " pod="kube-system/cilium-dxxhn" Sep 4 00:09:30.619638 containerd[1569]: time="2025-09-04T00:09:30.619487138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxxhn,Uid:f934c75b-5027-4c86-8e5f-284d5805a0f6,Namespace:kube-system,Attempt:0,}" Sep 4 00:09:30.657529 containerd[1569]: time="2025-09-04T00:09:30.657306148Z" level=info msg="connecting to shim 9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:09:30.700360 systemd[1]: Started cri-containerd-9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1.scope - libcontainer container 9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1. Sep 4 00:09:30.709544 sshd[4513]: Accepted publickey for core from 147.75.109.163 port 48172 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:30.713280 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:30.724659 systemd-logind[1486]: New session 28 of user core. Sep 4 00:09:30.740389 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 4 00:09:30.770968 containerd[1569]: time="2025-09-04T00:09:30.770908809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxxhn,Uid:f934c75b-5027-4c86-8e5f-284d5805a0f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\"" Sep 4 00:09:30.776607 containerd[1569]: time="2025-09-04T00:09:30.776558178Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 00:09:30.786077 containerd[1569]: time="2025-09-04T00:09:30.785828345Z" level=info msg="Container 889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:09:30.795115 containerd[1569]: time="2025-09-04T00:09:30.794970893Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\"" Sep 4 00:09:30.796051 containerd[1569]: time="2025-09-04T00:09:30.795989419Z" level=info msg="StartContainer for \"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\"" Sep 4 00:09:30.799326 containerd[1569]: time="2025-09-04T00:09:30.799196153Z" level=info msg="connecting to shim 889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" protocol=ttrpc version=3 Sep 4 00:09:30.830297 systemd[1]: Started cri-containerd-889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a.scope - libcontainer container 889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a. Sep 4 00:09:30.887562 containerd[1569]: time="2025-09-04T00:09:30.887317553Z" level=info msg="StartContainer for \"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\" returns successfully" Sep 4 00:09:30.904243 systemd[1]: cri-containerd-889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a.scope: Deactivated successfully. Sep 4 00:09:30.908737 containerd[1569]: time="2025-09-04T00:09:30.908676764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\" id:\"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\" pid:4578 exited_at:{seconds:1756944570 nanos:907279323}" Sep 4 00:09:30.908737 containerd[1569]: time="2025-09-04T00:09:30.908680970Z" level=info msg="received exit event container_id:\"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\" id:\"889f0367c101091b74d436bb46b9454987aeccad30b1adfee982c1942dd4987a\" pid:4578 exited_at:{seconds:1756944570 nanos:907279323}" Sep 4 00:09:30.935292 sshd[4558]: Connection closed by 147.75.109.163 port 48172 Sep 4 00:09:30.937306 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:30.945891 systemd[1]: sshd@30-10.128.0.81:22-147.75.109.163:48172.service: Deactivated successfully. Sep 4 00:09:30.950922 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 00:09:30.956712 systemd-logind[1486]: Session 28 logged out. Waiting for processes to exit. Sep 4 00:09:30.958988 systemd-logind[1486]: Removed session 28. 
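The containerd TaskExit events above report exit times as {seconds, nanos} since the Unix epoch. A quick illustrative sketch, standard library only, converting the mount-cgroup exit time back to the wall-clock form used by the journal timestamps:

```go
// Sketch: converts exited_at:{seconds:1756944570 nanos:907279323} from the
// mount-cgroup TaskExit event above into RFC 3339 form.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1756944570, 907279323).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
	// Output: 2025-09-04T00:09:30.907279323Z
}
```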
Sep 4 00:09:30.978755 kubelet[2774]: I0904 00:09:30.978687 2774 setters.go:602] "Node became not ready" node="ci-4372-1-0-nightly-20250903-2100-c1ca5efa03b32eeb2532" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T00:09:30Z","lastTransitionTime":"2025-09-04T00:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 00:09:31.000561 systemd[1]: Started sshd@31-10.128.0.81:22-147.75.109.163:48178.service - OpenSSH per-connection server daemon (147.75.109.163:48178). Sep 4 00:09:31.306356 sshd[4618]: Accepted publickey for core from 147.75.109.163 port 48178 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:09:31.308684 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:09:31.316656 systemd-logind[1486]: New session 29 of user core. Sep 4 00:09:31.322282 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 00:09:31.338537 containerd[1569]: time="2025-09-04T00:09:31.338466500Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 00:09:31.351033 containerd[1569]: time="2025-09-04T00:09:31.350407524Z" level=info msg="Container 8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:09:31.361855 containerd[1569]: time="2025-09-04T00:09:31.361780507Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\"" Sep 4 00:09:31.363261 containerd[1569]: time="2025-09-04T00:09:31.363167131Z" level=info msg="StartContainer for \"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\"" Sep 4 00:09:31.365415 containerd[1569]: time="2025-09-04T00:09:31.365338696Z" level=info msg="connecting to shim 8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" protocol=ttrpc version=3 Sep 4 00:09:31.396332 systemd[1]: Started cri-containerd-8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608.scope - libcontainer container 8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608. Sep 4 00:09:31.461394 containerd[1569]: time="2025-09-04T00:09:31.461305473Z" level=info msg="StartContainer for \"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\" returns successfully" Sep 4 00:09:31.468280 systemd[1]: cri-containerd-8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608.scope: Deactivated successfully. 
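The "Node became not ready" entry above embeds the node condition as JSON. Decoding that exact payload makes the reason and message easier to read; this is an illustrative sketch using only the standard library, with the struct fields matching the keys in the logged condition:

```go
// Sketch: decodes the node condition JSON from the setters.go entry above.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T00:09:30Z","lastTransitionTime":"2025-09-04T00:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	// Output: Ready=False reason=KubeletNotReady
}
```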
Sep 4 00:09:31.471894 containerd[1569]: time="2025-09-04T00:09:31.471831548Z" level=info msg="received exit event container_id:\"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\" id:\"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\" pid:4635 exited_at:{seconds:1756944571 nanos:469114331}" Sep 4 00:09:31.473180 containerd[1569]: time="2025-09-04T00:09:31.472515437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\" id:\"8bf8efeedb88892b607e6922147d1dc0ed4c56d858f5e2c91f35939f13165608\" pid:4635 exited_at:{seconds:1756944571 nanos:469114331}" Sep 4 00:09:32.334335 containerd[1569]: time="2025-09-04T00:09:32.333370559Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 00:09:32.351965 containerd[1569]: time="2025-09-04T00:09:32.351812251Z" level=info msg="Container f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:09:32.374829 containerd[1569]: time="2025-09-04T00:09:32.374750035Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\"" Sep 4 00:09:32.376589 containerd[1569]: time="2025-09-04T00:09:32.376462150Z" level=info msg="StartContainer for \"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\"" Sep 4 00:09:32.381448 containerd[1569]: time="2025-09-04T00:09:32.381384095Z" level=info msg="connecting to shim f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" protocol=ttrpc version=3 Sep 4 00:09:32.446365 systemd[1]: Started cri-containerd-f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e.scope - libcontainer container f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e. Sep 4 00:09:32.535618 systemd[1]: cri-containerd-f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e.scope: Deactivated successfully. Sep 4 00:09:32.541499 containerd[1569]: time="2025-09-04T00:09:32.541435220Z" level=info msg="StartContainer for \"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\" returns successfully" Sep 4 00:09:32.548197 containerd[1569]: time="2025-09-04T00:09:32.548111486Z" level=info msg="received exit event container_id:\"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\" id:\"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\" pid:4684 exited_at:{seconds:1756944572 nanos:546075670}" Sep 4 00:09:32.551279 containerd[1569]: time="2025-09-04T00:09:32.551178635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\" id:\"f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e\" pid:4684 exited_at:{seconds:1756944572 nanos:546075670}" Sep 4 00:09:32.609462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8434810b3ac3faf9500664a5ed7136870d3fd3bd8433f2bb6609842c7dbd72e-rootfs.mount: Deactivated successfully. 
Sep 4 00:09:32.866241 kubelet[2774]: E0904 00:09:32.865931 2774 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 00:09:33.341604 containerd[1569]: time="2025-09-04T00:09:33.341332605Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 00:09:33.360662 containerd[1569]: time="2025-09-04T00:09:33.357258269Z" level=info msg="Container c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:09:33.373573 containerd[1569]: time="2025-09-04T00:09:33.373503593Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\"" Sep 4 00:09:33.375712 containerd[1569]: time="2025-09-04T00:09:33.374273014Z" level=info msg="StartContainer for \"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\"" Sep 4 00:09:33.375712 containerd[1569]: time="2025-09-04T00:09:33.375609407Z" level=info msg="connecting to shim c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" protocol=ttrpc version=3 Sep 4 00:09:33.416285 systemd[1]: Started cri-containerd-c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5.scope - libcontainer container c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5. Sep 4 00:09:33.502717 systemd[1]: cri-containerd-c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5.scope: Deactivated successfully. Sep 4 00:09:33.511145 containerd[1569]: time="2025-09-04T00:09:33.510445427Z" level=info msg="received exit event container_id:\"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\" id:\"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\" pid:4724 exited_at:{seconds:1756944573 nanos:509743376}" Sep 4 00:09:33.514227 containerd[1569]: time="2025-09-04T00:09:33.514184916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\" id:\"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\" pid:4724 exited_at:{seconds:1756944573 nanos:509743376}" Sep 4 00:09:33.519417 containerd[1569]: time="2025-09-04T00:09:33.519219337Z" level=info msg="StartContainer for \"c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5\" returns successfully" Sep 4 00:09:33.588887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c629cccbcac4d81d02c60c3350728f28d88287c7d08585d4c00a2b8e52260ce5-rootfs.mount: Deactivated successfully. 
Sep 4 00:09:34.353222 containerd[1569]: time="2025-09-04T00:09:34.352970198Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 00:09:34.375042 containerd[1569]: time="2025-09-04T00:09:34.373176553Z" level=info msg="Container e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:09:34.394422 containerd[1569]: time="2025-09-04T00:09:34.394196529Z" level=info msg="CreateContainer within sandbox \"9c003fa021837c78350243e6ceb988ea24d6025dfa10f718b4036acde4645ca1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\"" Sep 4 00:09:34.397056 containerd[1569]: time="2025-09-04T00:09:34.396927865Z" level=info msg="StartContainer for \"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\"" Sep 4 00:09:34.399414 containerd[1569]: time="2025-09-04T00:09:34.399357660Z" level=info msg="connecting to shim e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465" address="unix:///run/containerd/s/e0bae6f9b0fca79a6c96bd6803c13c6acf1b9f079de06bdfbbf96a80dfa7010c" protocol=ttrpc version=3 Sep 4 00:09:34.439696 systemd[1]: Started cri-containerd-e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465.scope - libcontainer container e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465. Sep 4 00:09:34.528832 containerd[1569]: time="2025-09-04T00:09:34.528727822Z" level=info msg="StartContainer for \"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" returns successfully" Sep 4 00:09:34.669844 containerd[1569]: time="2025-09-04T00:09:34.669269571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"86c68035410dea8d7a6653c2f17127c3ba661612818b3fdd2ff60af253d81fab\" pid:4794 exited_at:{seconds:1756944574 nanos:668026828}" Sep 4 00:09:35.122103 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 4 00:09:35.916337 containerd[1569]: time="2025-09-04T00:09:35.916257255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"a4b2ab24e319d7f2fe3287bc4b986f5f989d85f9cee4c848148156b9fa91b51c\" pid:4870 exit_status:1 exited_at:{seconds:1756944575 nanos:915308138}" Sep 4 00:09:38.119284 containerd[1569]: time="2025-09-04T00:09:38.119221551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"a24f9131e4bf1287ae335ed44fce0a80d9f6986d5afa68fbd919213e4593f03b\" pid:5208 exit_status:1 exited_at:{seconds:1756944578 nanos:118215639}" Sep 4 00:09:38.607269 systemd-networkd[1440]: lxc_health: Link UP Sep 4 00:09:38.608526 systemd-networkd[1440]: lxc_health: Gained carrier Sep 4 00:09:38.694864 kubelet[2774]: I0904 00:09:38.693888 2774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dxxhn" podStartSLOduration=8.693850982 podStartE2EDuration="8.693850982s" podCreationTimestamp="2025-09-04 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:09:35.381364063 +0000 UTC m=+157.981524347" watchObservedRunningTime="2025-09-04 00:09:38.693850982 +0000 UTC 
m=+161.294011270" Sep 4 00:09:40.417871 containerd[1569]: time="2025-09-04T00:09:40.417804245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"1d03ea87bc5d76241fe2488db7e6380af989f4924c79155e6128af9a0fe6ce4a\" pid:5339 exited_at:{seconds:1756944580 nanos:416208753}" Sep 4 00:09:40.425592 kubelet[2774]: E0904 00:09:40.425399 2774 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51952->127.0.0.1:38285: write tcp 127.0.0.1:51952->127.0.0.1:38285: write: broken pipe Sep 4 00:09:40.570990 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 4 00:09:42.684443 containerd[1569]: time="2025-09-04T00:09:42.684299051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"09d29d50cfc5a79173d879d537d464feba14f40e0fd4aae1affa70f7697e7d8e\" pid:5370 exited_at:{seconds:1756944582 nanos:682491811}" Sep 4 00:09:42.716712 ntpd[1480]: Listen normally on 14 lxc_health [fe80::fc7a:caff:fe5c:9eab%14]:123 Sep 4 00:09:42.717563 ntpd[1480]: 4 Sep 00:09:42 ntpd[1480]: Listen normally on 14 lxc_health [fe80::fc7a:caff:fe5c:9eab%14]:123 Sep 4 00:09:45.019284 containerd[1569]: time="2025-09-04T00:09:45.018791180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e08296806e4de798a4f40452b12d64d222055425e61d222e50afad507de91465\" id:\"7f550942ea88fa3b6037436ec092cce4246ad359be3284a506275d4dd53f96ec\" pid:5398 exited_at:{seconds:1756944585 nanos:13457780}" Sep 4 00:09:45.072779 sshd[4620]: Connection closed by 147.75.109.163 port 48178 Sep 4 00:09:45.078439 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Sep 4 00:09:45.095422 systemd-logind[1486]: Session 29 logged out. Waiting for processes to exit. Sep 4 00:09:45.096244 systemd[1]: sshd@31-10.128.0.81:22-147.75.109.163:48178.service: Deactivated successfully. Sep 4 00:09:45.106599 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 00:09:45.114539 systemd-logind[1486]: Removed session 29.