Sep 3 23:58:53.287065 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 3 22:05:39 -00 2025
Sep 3 23:58:53.289728 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 3 23:58:53.289785 kernel: BIOS-provided physical RAM map:
Sep 3 23:58:53.289800 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 3 23:58:53.289813 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 3 23:58:53.289826 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 3 23:58:53.289852 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 3 23:58:53.289867 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 3 23:58:53.289881 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 3 23:58:53.289895 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 3 23:58:53.289910 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 3 23:58:53.289924 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 3 23:58:53.289937 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 3 23:58:53.289951 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 3 23:58:53.289971 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 3 23:58:53.289986 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 3 23:58:53.290001 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 3 23:58:53.290015 kernel: NX (Execute Disable) protection: active
Sep 3 23:58:53.290036 kernel: APIC: Static calls initialized
Sep 3 23:58:53.290050 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:58:53.290077 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 3 23:58:53.290096 kernel: random: crng init done
Sep 3 23:58:53.290111 kernel: secureboot: Secure boot disabled
Sep 3 23:58:53.290125 kernel: SMBIOS 2.4 present.
Sep 3 23:58:53.290182 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 3 23:58:53.290196 kernel: DMI: Memory slots populated: 1/1
Sep 3 23:58:53.290211 kernel: Hypervisor detected: KVM
Sep 3 23:58:53.290225 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 3 23:58:53.290239 kernel: kvm-clock: using sched offset of 16242680821 cycles
Sep 3 23:58:53.290255 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 3 23:58:53.290271 kernel: tsc: Detected 2299.998 MHz processor
Sep 3 23:58:53.290287 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 3 23:58:53.290309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 3 23:58:53.290324 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 3 23:58:53.290340 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 3 23:58:53.290356 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 3 23:58:53.290372 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 3 23:58:53.290389 kernel: Using GB pages for direct mapping
Sep 3 23:58:53.290405 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:58:53.290423 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 3 23:58:53.290449 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 3 23:58:53.290466 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 3 23:58:53.290483 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 3 23:58:53.290500 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 3 23:58:53.290516 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 3 23:58:53.290532 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 3 23:58:53.290552 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 3 23:58:53.290570 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 3 23:58:53.290586 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 3 23:58:53.290602 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 3 23:58:53.290619 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 3 23:58:53.290636 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 3 23:58:53.290652 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 3 23:58:53.290668 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 3 23:58:53.290684 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 3 23:58:53.290704 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 3 23:58:53.290720 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 3 23:58:53.290736 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 3 23:58:53.290753 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 3 23:58:53.290771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 3 23:58:53.290786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 3 23:58:53.290803 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 3 23:58:53.290821 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Sep 3 23:58:53.290838 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Sep 3 23:58:53.290860 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Sep 3 23:58:53.290878 kernel: Zone ranges:
Sep 3 23:58:53.290895 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 3 23:58:53.290912 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 3 23:58:53.290928 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 3 23:58:53.290945 kernel: Device empty
Sep 3 23:58:53.290961 kernel: Movable zone start for each node
Sep 3 23:58:53.290979 kernel: Early memory node ranges
Sep 3 23:58:53.290995 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 3 23:58:53.291014 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 3 23:58:53.291041 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 3 23:58:53.291057 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 3 23:58:53.291073 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 3 23:58:53.291090 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 3 23:58:53.291107 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 3 23:58:53.291124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 3 23:58:53.293206 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 3 23:58:53.293227 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 3 23:58:53.293254 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 3 23:58:53.293271 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 3 23:58:53.293287 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 3 23:58:53.293303 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 3 23:58:53.293320 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 3 23:58:53.293337 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 3 23:58:53.293355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 3 23:58:53.293372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 3 23:58:53.293389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 3 23:58:53.293409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 3 23:58:53.293427 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 3 23:58:53.293444 kernel: CPU topo: Max. logical packages: 1
Sep 3 23:58:53.293461 kernel: CPU topo: Max. logical dies: 1
Sep 3 23:58:53.293478 kernel: CPU topo: Max. dies per package: 1
Sep 3 23:58:53.293496 kernel: CPU topo: Max. threads per core: 2
Sep 3 23:58:53.293514 kernel: CPU topo: Num. cores per package: 1
Sep 3 23:58:53.293531 kernel: CPU topo: Num. threads per package: 2
Sep 3 23:58:53.293548 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 3 23:58:53.293566 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 3 23:58:53.293587 kernel: Booting paravirtualized kernel on KVM
Sep 3 23:58:53.293605 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 3 23:58:53.293623 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 3 23:58:53.293641 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 3 23:58:53.293657 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 3 23:58:53.293674 kernel: pcpu-alloc: [0] 0 1
Sep 3 23:58:53.293691 kernel: kvm-guest: PV spinlocks enabled
Sep 3 23:58:53.293709 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 3 23:58:53.293730 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 3 23:58:53.293752 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:58:53.293769 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 3 23:58:53.293786 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:58:53.293804 kernel: Fallback order for Node 0: 0
Sep 3 23:58:53.293821 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Sep 3 23:58:53.293839 kernel: Policy zone: Normal
Sep 3 23:58:53.293856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:58:53.293874 kernel: software IO TLB: area num 2.
Sep 3 23:58:53.293909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 3 23:58:53.293928 kernel: Kernel/User page tables isolation: enabled
Sep 3 23:58:53.293950 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 3 23:58:53.293967 kernel: ftrace: allocated 157 pages with 5 groups
Sep 3 23:58:53.293986 kernel: Dynamic Preempt: voluntary
Sep 3 23:58:53.294004 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:58:53.294023 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:58:53.294050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 3 23:58:53.294069 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:58:53.294092 kernel: Rude variant of Tasks RCU enabled.
Sep 3 23:58:53.294110 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:58:53.294151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:58:53.294170 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 3 23:58:53.294188 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:58:53.294207 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:58:53.294226 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:58:53.294250 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 3 23:58:53.294268 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:58:53.294287 kernel: Console: colour dummy device 80x25
Sep 3 23:58:53.294305 kernel: printk: legacy console [ttyS0] enabled
Sep 3 23:58:53.294323 kernel: ACPI: Core revision 20240827
Sep 3 23:58:53.294342 kernel: APIC: Switch to symmetric I/O mode setup
Sep 3 23:58:53.294361 kernel: x2apic enabled
Sep 3 23:58:53.294379 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 3 23:58:53.294398 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 3 23:58:53.294417 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 3 23:58:53.294440 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 3 23:58:53.294458 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 3 23:58:53.294477 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 3 23:58:53.294495 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 3 23:58:53.294514 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 3 23:58:53.294533 kernel: Spectre V2 : Mitigation: IBRS
Sep 3 23:58:53.294551 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 3 23:58:53.294569 kernel: RETBleed: Mitigation: IBRS
Sep 3 23:58:53.294591 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 3 23:58:53.294610 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 3 23:58:53.294629 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 3 23:58:53.294647 kernel: MDS: Mitigation: Clear CPU buffers
Sep 3 23:58:53.294666 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 3 23:58:53.294685 kernel: active return thunk: its_return_thunk
Sep 3 23:58:53.294703 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 3 23:58:53.294721 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 3 23:58:53.294740 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 3 23:58:53.294762 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 3 23:58:53.294780 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 3 23:58:53.294799 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 3 23:58:53.294818 kernel: Freeing SMP alternatives memory: 32K
Sep 3 23:58:53.294837 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:58:53.294855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:58:53.294874 kernel: landlock: Up and running.
Sep 3 23:58:53.294892 kernel: SELinux: Initializing.
Sep 3 23:58:53.294911 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 3 23:58:53.294934 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 3 23:58:53.294953 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 3 23:58:53.294972 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 3 23:58:53.294990 kernel: signal: max sigframe size: 1776
Sep 3 23:58:53.295008 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:58:53.295027 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:58:53.295052 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:58:53.295072 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 3 23:58:53.295090 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:58:53.295113 kernel: smpboot: x86: Booting SMP configuration:
Sep 3 23:58:53.297378 kernel: .... node #0, CPUs: #1
Sep 3 23:58:53.297407 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 3 23:58:53.297548 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 3 23:58:53.297571 kernel: smp: Brought up 1 node, 2 CPUs
Sep 3 23:58:53.297590 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 3 23:58:53.297611 kernel: Memory: 7566072K/7860552K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 288656K reserved, 0K cma-reserved)
Sep 3 23:58:53.297630 kernel: devtmpfs: initialized
Sep 3 23:58:53.297658 kernel: x86/mm: Memory block size: 128MB
Sep 3 23:58:53.297807 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 3 23:58:53.297827 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:58:53.297847 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 3 23:58:53.297866 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:58:53.297933 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:58:53.297951 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:58:53.297971 kernel: audit: type=2000 audit(1756943926.963:1): state=initialized audit_enabled=0 res=1
Sep 3 23:58:53.297990 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:58:53.298015 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 3 23:58:53.298041 kernel: cpuidle: using governor menu
Sep 3 23:58:53.298060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:58:53.298079 kernel: dca service started, version 1.12.1
Sep 3 23:58:53.298099 kernel: PCI: Using configuration type 1 for base access
Sep 3 23:58:53.298118 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 3 23:58:53.298153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:58:53.298172 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:58:53.298190 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:58:53.298211 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:58:53.298230 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:58:53.298249 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:58:53.298268 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:58:53.298287 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 3 23:58:53.298306 kernel: ACPI: Interpreter enabled
Sep 3 23:58:53.298325 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 3 23:58:53.298344 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 3 23:58:53.298363 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 3 23:58:53.298385 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 3 23:58:53.298404 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 3 23:58:53.298423 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 3 23:58:53.298714 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:58:53.299064 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 3 23:58:53.301375 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 3 23:58:53.301413 kernel: PCI host bridge to bus 0000:00
Sep 3 23:58:53.301611 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 3 23:58:53.301787 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 3 23:58:53.301957 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 3 23:58:53.302152 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 3 23:58:53.302322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 3 23:58:53.302529 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:58:53.302732 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 3 23:58:53.302936 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 3 23:58:53.304239 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 3 23:58:53.304480 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Sep 3 23:58:53.304671 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Sep 3 23:58:53.304856 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Sep 3 23:58:53.305063 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 3 23:58:53.305291 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Sep 3 23:58:53.305484 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Sep 3 23:58:53.305688 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 3 23:58:53.306396 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Sep 3 23:58:53.306702 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Sep 3 23:58:53.306731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 3 23:58:53.306803 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 3 23:58:53.306837 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 3 23:58:53.306857 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 3 23:58:53.306878 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 3 23:58:53.306900 kernel: iommu: Default domain type: Translated
Sep 3 23:58:53.306922 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 3 23:58:53.306943 kernel: efivars: Registered efivars operations
Sep 3 23:58:53.306964 kernel: PCI: Using ACPI for IRQ routing
Sep 3 23:58:53.306986 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 3 23:58:53.307006 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 3 23:58:53.307034 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 3 23:58:53.307055 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 3 23:58:53.307083 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 3 23:58:53.307104 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 3 23:58:53.307125 kernel: vgaarb: loaded
Sep 3 23:58:53.307165 kernel: clocksource: Switched to clocksource kvm-clock
Sep 3 23:58:53.307183 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:58:53.307202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:58:53.307220 kernel: pnp: PnP ACPI init
Sep 3 23:58:53.307247 kernel: pnp: PnP ACPI: found 7 devices
Sep 3 23:58:53.309191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 3 23:58:53.309224 kernel: NET: Registered PF_INET protocol family
Sep 3 23:58:53.309247 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 3 23:58:53.309269 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 3 23:58:53.309290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:58:53.309312 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:58:53.309333 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 3 23:58:53.309354 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 3 23:58:53.309385 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 3 23:58:53.309407 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 3 23:58:53.309428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:58:53.309449 kernel: NET: Registered PF_XDP protocol family
Sep 3 23:58:53.309722 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 3 23:58:53.309936 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 3 23:58:53.310172 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 3 23:58:53.310493 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 3 23:58:53.310719 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 3 23:58:53.310746 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:58:53.310765 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 3 23:58:53.310782 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 3 23:58:53.310800 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 3 23:58:53.310820 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 3 23:58:53.310838 kernel: clocksource: Switched to clocksource tsc
Sep 3 23:58:53.310857 kernel: Initialise system trusted keyrings
Sep 3 23:58:53.310881 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 3 23:58:53.310900 kernel: Key type asymmetric registered
Sep 3 23:58:53.310919 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:58:53.310938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 3 23:58:53.310956 kernel: io scheduler mq-deadline registered
Sep 3 23:58:53.310974 kernel: io scheduler kyber registered
Sep 3 23:58:53.310993 kernel: io scheduler bfq registered
Sep 3 23:58:53.311034 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 3 23:58:53.311054 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 3 23:58:53.311290 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 3 23:58:53.311316 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 3 23:58:53.311502 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 3 23:58:53.311526 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 3 23:58:53.311712 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 3 23:58:53.311737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:58:53.311756 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 3 23:58:53.311774 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 3 23:58:53.311799 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 3 23:58:53.311817 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 3 23:58:53.312018 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 3 23:58:53.312044 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 3 23:58:53.312064 kernel: i8042: Warning: Keylock active
Sep 3 23:58:53.312082 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 3 23:58:53.312101 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 3 23:58:53.312899 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 3 23:58:53.313106 kernel: rtc_cmos 00:00: registered as rtc0
Sep 3 23:58:53.313303 kernel: rtc_cmos 00:00: setting system clock to 2025-09-03T23:58:52 UTC (1756943932)
Sep 3 23:58:53.313475 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 3 23:58:53.313497 kernel: intel_pstate: CPU model not supported
Sep 3 23:58:53.313515 kernel: pstore: Using crash dump compression: deflate
Sep 3 23:58:53.314999 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 3 23:58:53.315035 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:58:53.315055 kernel: Segment Routing with IPv6
Sep 3 23:58:53.315080 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:58:53.315099 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:58:53.315118 kernel: Key type dns_resolver registered
Sep 3 23:58:53.315152 kernel: IPI shorthand broadcast: enabled
Sep 3 23:58:53.315167 kernel: sched_clock: Marking stable (4314005343, 1101395877)->(5844542677, -429141457)
Sep 3 23:58:53.315182 kernel: registered taskstats version 1
Sep 3 23:58:53.315199 kernel: Loading compiled-in X.509 certificates
Sep 3 23:58:53.315217 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 247a8159a15e16f8eb89737aa66cd9cf9bbb3c10'
Sep 3 23:58:53.315234 kernel: Demotion targets for Node 0: null
Sep 3 23:58:53.315255 kernel: Key type .fscrypt registered
Sep 3 23:58:53.315272 kernel: Key type fscrypt-provisioning registered
Sep 3 23:58:53.315290 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:58:53.315308 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 3 23:58:53.315327 kernel: ima: No architecture policies found
Sep 3 23:58:53.315344 kernel: clk: Disabling unused clocks
Sep 3 23:58:53.315360 kernel: Warning: unable to open an initial console.
Sep 3 23:58:53.315376 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 3 23:58:53.315395 kernel: Write protecting the kernel read-only data: 24576k
Sep 3 23:58:53.315417 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 3 23:58:53.315435 kernel: Run /init as init process
Sep 3 23:58:53.315453 kernel: with arguments:
Sep 3 23:58:53.315469 kernel: /init
Sep 3 23:58:53.315485 kernel: with environment:
Sep 3 23:58:53.315502 kernel: HOME=/
Sep 3 23:58:53.315517 kernel: TERM=linux
Sep 3 23:58:53.315533 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:58:53.315551 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:58:53.315577 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:58:53.315596 systemd[1]: Detected virtualization google.
Sep 3 23:58:53.315615 systemd[1]: Detected architecture x86-64.
Sep 3 23:58:53.315631 systemd[1]: Running in initrd.
Sep 3 23:58:53.315648 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:58:53.315669 systemd[1]: Hostname set to .
Sep 3 23:58:53.315688 systemd[1]: Initializing machine ID from random generator.
Sep 3 23:58:53.315712 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:58:53.315751 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:58:53.315774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:58:53.315796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:58:53.315817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:58:53.315838 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:58:53.315864 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:58:53.315887 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:58:53.315908 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:58:53.315928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:58:53.315948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:58:53.315969 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:58:53.315992 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:58:53.316029 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:58:53.316051 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:58:53.316072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:58:53.316094 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:58:53.316116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:58:53.316187 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:58:53.316209 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:58:53.316228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:58:53.316253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:58:53.316274 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:58:53.316295 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:58:53.316316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:58:53.316336 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:58:53.316356 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:58:53.316377 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:58:53.316397 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:58:53.316418 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:58:53.316443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:58:53.316462 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:58:53.316497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:58:53.316517 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:58:53.316583 systemd-journald[207]: Collecting audit messages is disabled.
Sep 3 23:58:53.316638 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:58:53.316666 systemd-journald[207]: Journal started
Sep 3 23:58:53.316721 systemd-journald[207]: Runtime Journal (/run/log/journal/2fbf30a677ff4f9889194308198d0af6) is 8M, max 148.9M, 140.9M free.
Sep 3 23:58:53.280988 systemd-modules-load[208]: Inserted module 'overlay'
Sep 3 23:58:53.320173 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:58:53.328349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:58:53.333701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:58:53.346836 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:58:53.353343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:58:53.355993 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 3 23:58:53.360336 kernel: Bridge firewalling registered
Sep 3 23:58:53.359626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:58:53.362569 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:58:53.371409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:58:53.377311 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:58:53.381683 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:58:53.398672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:58:53.406570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:58:53.414323 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:58:53.418512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:58:53.428662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:58:53.436308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:58:53.461661 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c7fa427551c105672074cbcbe7e23c997f471a6e879d708e8d6cbfad2147666e
Sep 3 23:58:53.512468 systemd-resolved[246]: Positive Trust Anchors:
Sep 3 23:58:53.513115 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:58:53.513204 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:58:53.519111 systemd-resolved[246]: Defaulting to hostname 'linux'.
Sep 3 23:58:53.520883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:58:53.536590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:58:53.607254 kernel: SCSI subsystem initialized
Sep 3 23:58:53.622166 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:58:53.636176 kernel: iscsi: registered transport (tcp)
Sep 3 23:58:53.667003 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:58:53.667095 kernel: QLogic iSCSI HBA Driver
Sep 3 23:58:53.695324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:58:53.719356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:58:53.727489 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:58:53.803473 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:58:53.810331 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:58:53.873230 kernel: raid6: avx2x4 gen() 18186 MB/s
Sep 3 23:58:53.890228 kernel: raid6: avx2x2 gen() 23036 MB/s
Sep 3 23:58:53.909081 kernel: raid6: avx2x1 gen() 15033 MB/s
Sep 3 23:58:53.909201 kernel: raid6: using algorithm avx2x2 gen() 23036 MB/s
Sep 3 23:58:53.927502 kernel: raid6: .... xor() 18341 MB/s, rmw enabled
Sep 3 23:58:53.927633 kernel: raid6: using avx2x2 recovery algorithm
Sep 3 23:58:53.953195 kernel: xor: automatically using best checksumming function avx
Sep 3 23:58:54.148176 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:58:54.158439 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:58:54.167495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:58:54.211996 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 3 23:58:54.220986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:58:54.226868 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:58:54.264127 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Sep 3 23:58:54.306476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:58:54.316358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:58:54.419963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:58:54.431113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:58:54.522346 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Sep 3 23:58:54.544171 kernel: cryptd: max_cpu_qlen set to 1000
Sep 3 23:58:54.552630 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 3 23:58:54.567168 kernel: AES CTR mode by8 optimization enabled
Sep 3 23:58:54.612456 kernel: scsi host0: Virtio SCSI HBA
Sep 3 23:58:54.619221 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 3 23:58:54.717386 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 3 23:58:54.717776 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 3 23:58:54.727667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:58:54.734329 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 3 23:58:54.734780 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 3 23:58:54.735025 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 3 23:58:54.728169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:58:54.737056 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:58:54.756186 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:58:54.756228 kernel: GPT:17805311 != 25165823
Sep 3 23:58:54.756253 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:58:54.756275 kernel: GPT:17805311 != 25165823
Sep 3 23:58:54.756296 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:58:54.756317 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 3 23:58:54.756339 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 3 23:58:54.742670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:58:54.759969 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:58:54.809909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:58:54.872443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 3 23:58:54.879624 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:58:54.895339 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 3 23:58:54.928949 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 3 23:58:54.933325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 3 23:58:54.948636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 3 23:58:54.948998 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:58:54.959374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:58:54.965490 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:58:54.974189 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:58:54.995350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:58:55.009108 disk-uuid[610]: Primary Header is updated.
Sep 3 23:58:55.009108 disk-uuid[610]: Secondary Entries is updated.
Sep 3 23:58:55.009108 disk-uuid[610]: Secondary Header is updated.
Sep 3 23:58:55.030483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:58:55.039398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 3 23:58:56.072088 disk-uuid[611]: The operation has completed successfully.
Sep 3 23:58:56.076353 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 3 23:58:56.176817 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:58:56.176975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:58:56.239067 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:58:56.277269 sh[632]: Success
Sep 3 23:58:56.308905 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:58:56.309022 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:58:56.309052 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:58:56.325165 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 3 23:58:56.456410 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:58:56.466252 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:58:56.490162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:58:56.529239 kernel: BTRFS: device fsid 8a9c2e34-3d3c-49a9-acce-59bf90003071 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (644)
Sep 3 23:58:56.534943 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9c2e34-3d3c-49a9-acce-59bf90003071
Sep 3 23:58:56.535027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 3 23:58:56.569896 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 3 23:58:56.569997 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:58:56.570022 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:58:56.579284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:58:56.581029 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:58:56.583752 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:58:56.585045 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:58:56.612778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:58:56.647203 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (667)
Sep 3 23:58:56.652757 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 3 23:58:56.652840 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 3 23:58:56.667269 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 3 23:58:56.667383 kernel: BTRFS info (device sda6): turning on async discard
Sep 3 23:58:56.667408 kernel: BTRFS info (device sda6): enabling free space tree
Sep 3 23:58:56.676178 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 3 23:58:56.679264 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:58:56.687914 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:58:56.853023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:58:56.873798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:58:56.977376 systemd-networkd[813]: lo: Link UP
Sep 3 23:58:56.977398 systemd-networkd[813]: lo: Gained carrier
Sep 3 23:58:56.980313 systemd-networkd[813]: Enumeration completed
Sep 3 23:58:56.980512 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:58:56.981105 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:58:56.981113 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:58:56.982771 systemd-networkd[813]: eth0: Link UP
Sep 3 23:58:56.983083 systemd-networkd[813]: eth0: Gained carrier
Sep 3 23:58:57.007702 ignition[730]: Ignition 2.21.0
Sep 3 23:58:56.983102 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:58:57.007712 ignition[730]: Stage: fetch-offline
Sep 3 23:58:56.988011 systemd[1]: Reached target network.target - Network.
Sep 3 23:58:57.007764 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:56.995244 systemd-networkd[813]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436'
Sep 3 23:58:57.007781 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:56.995265 systemd-networkd[813]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 3 23:58:57.007982 ignition[730]: parsed url from cmdline: ""
Sep 3 23:58:57.011452 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:58:57.007987 ignition[730]: no config URL provided
Sep 3 23:58:57.017716 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 3 23:58:57.007997 ignition[730]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:58:57.008012 ignition[730]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:58:57.008026 ignition[730]: failed to fetch config: resource requires networking
Sep 3 23:58:57.008575 ignition[730]: Ignition finished successfully
Sep 3 23:58:57.072050 ignition[822]: Ignition 2.21.0
Sep 3 23:58:57.072059 ignition[822]: Stage: fetch
Sep 3 23:58:57.072270 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:57.072282 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:57.091642 unknown[822]: fetched base config from "system"
Sep 3 23:58:57.072383 ignition[822]: parsed url from cmdline: ""
Sep 3 23:58:57.091655 unknown[822]: fetched base config from "system"
Sep 3 23:58:57.072390 ignition[822]: no config URL provided
Sep 3 23:58:57.091665 unknown[822]: fetched user config from "gcp"
Sep 3 23:58:57.072399 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:58:57.095426 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 3 23:58:57.072412 ignition[822]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:58:57.098742 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:58:57.072461 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 3 23:58:57.078340 ignition[822]: GET result: OK
Sep 3 23:58:57.078453 ignition[822]: parsing config with SHA512: ddbbdf846f9f85fdb68bb0b7e7d75593ba0174cca8f0e0a1163f561718db6d134fc6afd04f46d0eb9db8779824612d819f1d2d763502760d9e5b53871239b9c8
Sep 3 23:58:57.092175 ignition[822]: fetch: fetch complete
Sep 3 23:58:57.092184 ignition[822]: fetch: fetch passed
Sep 3 23:58:57.092251 ignition[822]: Ignition finished successfully
Sep 3 23:58:57.148971 ignition[828]: Ignition 2.21.0
Sep 3 23:58:57.148990 ignition[828]: Stage: kargs
Sep 3 23:58:57.153680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:58:57.149330 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:57.161477 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:58:57.149349 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:57.151741 ignition[828]: kargs: kargs passed
Sep 3 23:58:57.151894 ignition[828]: Ignition finished successfully
Sep 3 23:58:57.216045 ignition[834]: Ignition 2.21.0
Sep 3 23:58:57.216065 ignition[834]: Stage: disks
Sep 3 23:58:57.216364 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:57.219827 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:58:57.216383 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:57.227347 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:58:57.218033 ignition[834]: disks: disks passed
Sep 3 23:58:57.233570 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:58:57.218120 ignition[834]: Ignition finished successfully
Sep 3 23:58:57.240374 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:58:57.246498 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:58:57.250595 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:58:57.256343 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:58:57.318177 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 3 23:58:57.334865 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:58:57.342622 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:58:57.555162 kernel: EXT4-fs (sda9): mounted filesystem c3518c93-f823-4477-a620-ff9666a59be5 r/w with ordered data mode. Quota mode: none.
Sep 3 23:58:57.555391 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:58:57.556219 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:58:57.561730 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:58:57.580515 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:58:57.583429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:58:57.583609 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:58:57.583696 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:58:57.605173 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (851)
Sep 3 23:58:57.610060 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 3 23:58:57.610145 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 3 23:58:57.612683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:58:57.619777 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 3 23:58:57.619819 kernel: BTRFS info (device sda6): turning on async discard
Sep 3 23:58:57.619843 kernel: BTRFS info (device sda6): enabling free space tree
Sep 3 23:58:57.617369 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:58:57.629491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:58:57.755787 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:58:57.772626 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:58:57.782652 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:58:57.792169 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:58:57.969012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:58:57.975375 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:58:57.982674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:58:58.002077 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:58:58.003845 kernel: BTRFS info (device sda6): last unmount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 3 23:58:58.044466 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:58:58.049272 ignition[964]: INFO : Ignition 2.21.0
Sep 3 23:58:58.049272 ignition[964]: INFO : Stage: mount
Sep 3 23:58:58.054631 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:58.054631 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:58.054631 ignition[964]: INFO : mount: mount passed
Sep 3 23:58:58.054631 ignition[964]: INFO : Ignition finished successfully
Sep 3 23:58:58.057086 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:58:58.068537 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:58:58.557723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:58:58.600292 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (976)
Sep 3 23:58:58.604069 kernel: BTRFS info (device sda6): first mount of filesystem 75efd3be-3398-4525-8f67-b36cc847539d
Sep 3 23:58:58.604181 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 3 23:58:58.613217 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 3 23:58:58.613329 kernel: BTRFS info (device sda6): turning on async discard
Sep 3 23:58:58.613355 kernel: BTRFS info (device sda6): enabling free space tree
Sep 3 23:58:58.617265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:58:58.647287 systemd-networkd[813]: eth0: Gained IPv6LL
Sep 3 23:58:58.659880 ignition[993]: INFO : Ignition 2.21.0
Sep 3 23:58:58.659880 ignition[993]: INFO : Stage: files
Sep 3 23:58:58.668383 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:58:58.668383 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:58:58.668383 ignition[993]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:58:58.668383 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:58:58.668383 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:58:58.689375 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:58:58.689375 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:58:58.689375 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:58:58.679445 unknown[993]: wrote ssh authorized keys file for user: core
Sep 3 23:58:58.705322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 3 23:58:58.705322 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 3 23:58:58.840147 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:58:59.368967 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 3 23:58:59.368967 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:58:59.368967 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 3 23:58:59.651353 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 3 23:58:59.959451 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:58:59.959451 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 3 23:58:59.975352 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 3 23:59:00.328332 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 3 23:59:00.864690 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 3 23:59:00.864690 ignition[993]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:59:00.877301 ignition[993]: INFO : files: files passed
Sep 3 23:59:00.877301 ignition[993]: INFO : Ignition finished successfully
Sep 3 23:59:00.878293 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:59:00.885270 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:59:00.895665 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:59:00.936174 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:59:00.936174 initrd-setup-root-after-ignition[1021]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:59:00.924280 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:59:00.953351 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:59:00.924485 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:59:00.937783 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:59:00.941239 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:59:00.949911 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:59:01.040059 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:59:01.040251 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:59:01.047077 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:59:01.050603 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:59:01.056639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:59:01.058915 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:59:01.099643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:59:01.107065 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:59:01.138999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:59:01.143069 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:59:01.149527 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:59:01.150035 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:59:01.150310 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:59:01.160718 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:59:01.167841 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:59:01.173493 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:59:01.179729 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:59:01.183842 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:59:01.190304 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:59:01.195720 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:59:01.202469 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:59:01.206976 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:59:01.215537 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:59:01.221542 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:59:01.226486 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:59:01.226733 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:59:01.239496 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:59:01.240356 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:59:01.248701 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:59:01.249235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:59:01.254794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:59:01.255717 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:59:01.278577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:59:01.279513 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:59:01.283916 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:59:01.284577 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:59:01.296025 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:59:01.304547 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:59:01.304940 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:59:01.319561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:59:01.323335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:59:01.324352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:59:01.331577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:59:01.331956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:59:01.352360 ignition[1046]: INFO : Ignition 2.21.0
Sep 3 23:59:01.355446 ignition[1046]: INFO : Stage: umount
Sep 3 23:59:01.355446 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:59:01.355446 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 3 23:59:01.353725 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:59:01.384389 ignition[1046]: INFO : umount: umount passed
Sep 3 23:59:01.384389 ignition[1046]: INFO : Ignition finished successfully
Sep 3 23:59:01.353926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:59:01.362347 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:59:01.362678 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:59:01.370339 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:59:01.370556 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:59:01.370974 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:59:01.371037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:59:01.379902 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 3 23:59:01.380002 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 3 23:59:01.390542 systemd[1]: Stopped target network.target - Network.
Sep 3 23:59:01.399477 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:59:01.399758 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:59:01.408602 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:59:01.412498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:59:01.417410 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:59:01.418884 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:59:01.426482 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:59:01.432619 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:59:01.432686 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:59:01.438679 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:59:01.438735 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:59:01.445775 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:59:01.445860 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:59:01.451577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:59:01.451644 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:59:01.459265 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:59:01.466045 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:59:01.480033 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:59:01.481061 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:59:01.481268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:59:01.490061 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:59:01.490479 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:59:01.490655 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:59:01.499352 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:59:01.499721 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:59:01.499845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:59:01.508585 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:59:01.513791 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:59:01.514276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:59:01.517572 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:59:01.518244 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:59:01.523468 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:59:01.534407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:59:01.534566 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:59:01.544447 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:59:01.544583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:59:01.554654 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:59:01.554846 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:59:01.560512 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:59:01.560595 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:59:01.569941 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:59:01.582830 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:59:01.582965 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:59:01.590575 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:59:01.590791 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:59:01.593788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:59:01.593859 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:59:01.597882 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:59:01.597936 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:59:01.603770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:59:01.603858 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:59:01.617055 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:59:01.617292 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:59:01.631665 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:59:01.631756 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:59:01.645641 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:59:01.654326 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:59:01.654710 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:59:01.663961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:59:01.664082 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:59:01.681837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:59:01.681931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:59:01.693635 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 3 23:59:01.799173 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:59:01.693713 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 3 23:59:01.693763 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:59:01.694307 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:59:01.694430 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:59:01.699940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:59:01.700117 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:59:01.710617 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:59:01.714894 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:59:01.758294 systemd[1]: Switching root.
Sep 3 23:59:01.831310 systemd-journald[207]: Journal stopped
Sep 3 23:59:04.416416 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:59:04.416475 kernel: SELinux: policy capability open_perms=1
Sep 3 23:59:04.416496 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:59:04.416514 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:59:04.416532 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:59:04.416549 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:59:04.416573 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:59:04.416592 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:59:04.416610 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:59:04.416629 kernel: audit: type=1403 audit(1756943942.523:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:59:04.416650 systemd[1]: Successfully loaded SELinux policy in 63.938ms.
Sep 3 23:59:04.416672 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.851ms.
Sep 3 23:59:04.416694 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:59:04.416718 systemd[1]: Detected virtualization google.
Sep 3 23:59:04.416740 systemd[1]: Detected architecture x86-64.
Sep 3 23:59:04.416761 systemd[1]: Detected first boot.
Sep 3 23:59:04.416785 systemd[1]: Initializing machine ID from random generator.
Sep 3 23:59:04.416805 zram_generator::config[1090]: No configuration found.
Sep 3 23:59:04.416831 kernel: Guest personality initialized and is inactive
Sep 3 23:59:04.416850 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 3 23:59:04.416869 kernel: Initialized host personality
Sep 3 23:59:04.416888 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:59:04.416908 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:59:04.416930 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:59:04.416950 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:59:04.416973 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:59:04.416994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:59:04.417015 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:59:04.417036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:59:04.417058 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:59:04.417078 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:59:04.417100 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:59:04.417124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:59:04.417158 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:59:04.417179 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:59:04.417201 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:59:04.417221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:59:04.417244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:59:04.417270 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:59:04.417293 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:59:04.417321 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:59:04.417346 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 3 23:59:04.417368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:59:04.417389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:59:04.417411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:59:04.417433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:59:04.417454 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:59:04.417476 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:59:04.417502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:59:04.417523 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:59:04.417545 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:59:04.417566 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:59:04.417587 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:59:04.417609 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:59:04.417630 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:59:04.417657 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:59:04.417679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:59:04.417701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:59:04.417724 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:59:04.417746 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:59:04.417768 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:59:04.417793 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:59:04.417815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:04.417837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:59:04.417859 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:59:04.417881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:59:04.417903 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:59:04.417925 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:59:04.417947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:59:04.417973 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:59:04.417995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:59:04.418017 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:59:04.418040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:59:04.418062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:59:04.418084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:59:04.418107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:59:04.418141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:59:04.418168 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:59:04.418190 kernel: fuse: init (API version 7.41)
Sep 3 23:59:04.418212 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:59:04.418234 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:59:04.418256 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:59:04.418284 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:59:04.418306 kernel: ACPI: bus type drm_connector registered
Sep 3 23:59:04.418326 kernel: loop: module loaded
Sep 3 23:59:04.418348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:59:04.418374 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:59:04.418397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:59:04.418419 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:59:04.418441 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:59:04.418500 systemd-journald[1178]: Collecting audit messages is disabled.
Sep 3 23:59:04.418549 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:59:04.418571 systemd-journald[1178]: Journal started
Sep 3 23:59:04.418614 systemd-journald[1178]: Runtime Journal (/run/log/journal/1597e245f232443a8fe290f20e451f91) is 8M, max 148.9M, 140.9M free.
Sep 3 23:59:03.587252 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:59:03.613184 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 3 23:59:03.613797 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:59:04.449196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:59:04.470167 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:59:04.481172 systemd[1]: Stopped verity-setup.service.
Sep 3 23:59:04.514167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:04.529190 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:59:04.539942 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:59:04.549588 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:59:04.559568 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:59:04.570560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:59:04.579772 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:59:04.589531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:59:04.598977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:59:04.610807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:59:04.621899 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:59:04.622467 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:59:04.634802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:59:04.635106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:59:04.645804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:59:04.646100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:59:04.655702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:59:04.655979 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:59:04.666912 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:59:04.667256 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:59:04.676761 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:59:04.677052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:59:04.688950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:59:04.701395 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:59:04.714113 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:59:04.725821 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:59:04.736722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:59:04.765344 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:59:04.779851 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:59:04.797272 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:59:04.806430 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:59:04.806727 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:59:04.817937 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:59:04.830896 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:59:04.841528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:59:04.845881 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:59:04.863933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:59:04.875380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:59:04.878243 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:59:04.888331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:59:04.890922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:59:04.906965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:59:04.913271 systemd-journald[1178]: Time spent on flushing to /var/log/journal/1597e245f232443a8fe290f20e451f91 is 100.082ms for 962 entries.
Sep 3 23:59:04.913271 systemd-journald[1178]: System Journal (/var/log/journal/1597e245f232443a8fe290f20e451f91) is 8M, max 584.8M, 576.8M free.
Sep 3 23:59:05.053600 systemd-journald[1178]: Received client request to flush runtime journal.
Sep 3 23:59:05.053668 kernel: loop0: detected capacity change from 0 to 113872
Sep 3 23:59:04.933994 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:59:04.948741 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:59:04.964520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:59:04.976809 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:59:05.002186 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:59:05.015837 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:59:05.045419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:59:05.055980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:59:05.091166 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:59:05.128202 kernel: loop1: detected capacity change from 0 to 229808
Sep 3 23:59:05.146208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:59:05.148009 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:59:05.174844 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:59:05.192949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:59:05.277185 kernel: loop2: detected capacity change from 0 to 146240
Sep 3 23:59:05.282829 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 3 23:59:05.282868 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 3 23:59:05.300601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:59:05.415637 kernel: loop3: detected capacity change from 0 to 52072
Sep 3 23:59:05.492426 kernel: loop4: detected capacity change from 0 to 113872
Sep 3 23:59:05.553788 kernel: loop5: detected capacity change from 0 to 229808
Sep 3 23:59:05.618192 kernel: loop6: detected capacity change from 0 to 146240
Sep 3 23:59:05.694247 kernel: loop7: detected capacity change from 0 to 52072
Sep 3 23:59:05.753401 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 3 23:59:05.754437 (sd-merge)[1235]: Merged extensions into '/usr'.
Sep 3 23:59:05.773184 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:59:05.773210 systemd[1]: Reloading...
Sep 3 23:59:05.969490 zram_generator::config[1257]: No configuration found.
Sep 3 23:59:06.241712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:59:06.290111 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:59:06.464688 systemd[1]: Reloading finished in 690 ms.
Sep 3 23:59:06.485531 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:59:06.495930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:59:06.522394 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:59:06.532389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:59:06.585170 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:59:06.585200 systemd[1]: Reloading...
Sep 3 23:59:06.608078 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:59:06.608695 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:59:06.609341 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:59:06.610020 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:59:06.612021 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:59:06.612809 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 3 23:59:06.613053 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 3 23:59:06.625946 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:59:06.625967 systemd-tmpfiles[1302]: Skipping /boot
Sep 3 23:59:06.675557 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:59:06.677225 systemd-tmpfiles[1302]: Skipping /boot
Sep 3 23:59:06.758253 zram_generator::config[1332]: No configuration found.
Sep 3 23:59:06.879917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:59:07.005849 systemd[1]: Reloading finished in 419 ms.
Sep 3 23:59:07.027871 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:59:07.057002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:59:07.080464 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:59:07.098295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:59:07.120281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:59:07.139164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:59:07.153523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:59:07.174667 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:59:07.193793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:07.194176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:59:07.203645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:59:07.223523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:59:07.229284 augenrules[1398]: No rules
Sep 3 23:59:07.237416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:59:07.247577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:59:07.247870 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:59:07.260042 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:59:07.273319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:07.277346 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:59:07.279116 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:59:07.294661 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:59:07.299355 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
Sep 3 23:59:07.307703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:59:07.308437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:59:07.320348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:59:07.321808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:59:07.333340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:59:07.333660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:59:07.376635 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:59:07.390153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:59:07.404535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:59:07.431590 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:59:07.473267 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:07.475481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:59:07.484556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:59:07.488955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:59:07.507520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:59:07.523438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:59:07.539438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:59:07.553409 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 3 23:59:07.561414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:59:07.561488 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:59:07.570437 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:59:07.580333 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:59:07.594849 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:59:07.605365 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:59:07.605444 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 3 23:59:07.609182 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:59:07.610776 augenrules[1441]: /sbin/augenrules: No change
Sep 3 23:59:07.618988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:59:07.619321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:59:07.630872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:59:07.632398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:59:07.656618 augenrules[1472]: No rules
Sep 3 23:59:07.666151 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:59:07.666522 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:59:07.677911 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:59:07.678233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:59:07.687752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:59:07.688752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:59:07.701239 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:59:07.711866 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 3 23:59:07.736513 systemd-resolved[1384]: Positive Trust Anchors:
Sep 3 23:59:07.737828 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:59:07.737907 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:59:07.755486 systemd-resolved[1384]: Defaulting to hostname 'linux'.
Sep 3 23:59:07.757344 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 3 23:59:07.770300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:59:07.770430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:59:07.770784 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:59:07.794964 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Sep 3 23:59:07.797973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:59:07.810353 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:59:07.820443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:59:07.831350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:59:07.841293 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 3 23:59:07.852571 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:59:07.863566 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 3 23:59:07.875335 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 3 23:59:07.886378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 3 23:59:07.886980 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:59:07.894332 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:59:07.902303 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 3 23:59:07.917240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 3 23:59:07.931569 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 3 23:59:07.948112 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 3 23:59:07.958694 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 3 23:59:07.972330 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 3 23:59:07.982786 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 3 23:59:07.997374 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 3 23:59:08.012725 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 3 23:59:08.035682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 3 23:59:08.051429 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 3 23:59:08.054009 systemd-networkd[1457]: lo: Link UP
Sep 3 23:59:08.054023 systemd-networkd[1457]: lo: Gained carrier
Sep 3 23:59:08.057778 systemd-networkd[1457]: Enumeration completed
Sep 3 23:59:08.060312 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:59:08.070324 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:59:08.078616 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:59:08.078702 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:59:08.081852 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 3 23:59:08.083082 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:59:08.083293 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:59:08.085977 systemd-networkd[1457]: eth0: Link UP
Sep 3 23:59:08.086238 systemd-networkd[1457]: eth0: Gained carrier
Sep 3 23:59:08.086271 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:59:08.093592 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 3 23:59:08.099280 systemd-networkd[1457]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436.c.flatcar-212911.internal' to 'ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436'
Sep 3 23:59:08.099304 systemd-networkd[1457]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 3 23:59:08.107522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 3 23:59:08.130601 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 3 23:59:08.159065 kernel: mousedev: PS/2 mouse device common for all mice
Sep 3 23:59:08.159533 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 3 23:59:08.170291 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 3 23:59:08.175454 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 3 23:59:08.192084 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 3 23:59:08.212765 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 3 23:59:08.216029 systemd[1]: Started ntpd.service - Network Time Service.
Sep 3 23:59:08.219535 jq[1511]: false
Sep 3 23:59:08.237165 kernel: ACPI: button: Power Button [PWRF]
Sep 3 23:59:08.237251 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 3 23:59:08.247818 kernel: ACPI: button: Sleep Button [SLPF]
Sep 3 23:59:08.245845 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 3 23:59:08.260535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 3 23:59:08.293388 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 3 23:59:08.302496 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing passwd entry cache
Sep 3 23:59:08.307402 oslogin_cache_refresh[1516]: Refreshing passwd entry cache
Sep 3 23:59:08.320201 extend-filesystems[1513]: Found /dev/sda6
Sep 3 23:59:08.315556 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 3 23:59:08.350372 coreos-metadata[1508]: Sep 03 23:59:08.337 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 3 23:59:08.326908 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 3 23:59:08.329119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 3 23:59:08.332220 systemd[1]: Starting update-engine.service - Update Engine...
Sep 3 23:59:08.344306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 3 23:59:08.353856 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:59:08.358196 coreos-metadata[1508]: Sep 03 23:59:08.356 INFO Fetch successful
Sep 3 23:59:08.358196 coreos-metadata[1508]: Sep 03 23:59:08.356 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 3 23:59:08.358327 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting users, quitting
Sep 3 23:59:08.358327 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 3 23:59:08.354042 oslogin_cache_refresh[1516]: Failure getting users, quitting
Sep 3 23:59:08.354087 oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 3 23:59:08.363174 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing group entry cache
Sep 3 23:59:08.363311 coreos-metadata[1508]: Sep 03 23:59:08.362 INFO Fetch successful
Sep 3 23:59:08.363311 coreos-metadata[1508]: Sep 03 23:59:08.362 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 3 23:59:08.363311 coreos-metadata[1508]: Sep 03 23:59:08.362 INFO Fetch successful
Sep 3 23:59:08.363311 coreos-metadata[1508]: Sep 03 23:59:08.363 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 3 23:59:08.361251 oslogin_cache_refresh[1516]: Refreshing group entry cache
Sep 3 23:59:08.368201 coreos-metadata[1508]: Sep 03 23:59:08.364 INFO Fetch successful
Sep 3 23:59:08.374243 extend-filesystems[1513]: Found /dev/sda9
Sep 3 23:59:08.369435 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 3 23:59:08.402702 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting groups, quitting
Sep 3 23:59:08.402702 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 3 23:59:08.376685 oslogin_cache_refresh[1516]: Failure getting groups, quitting
Sep 3 23:59:08.397724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 3 23:59:08.376710 oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 3 23:59:08.398623 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 3 23:59:08.399995 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 3 23:59:08.400671 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 3 23:59:08.412801 extend-filesystems[1513]: Checking size of /dev/sda9
Sep 3 23:59:08.423814 systemd[1]: motdgen.service: Deactivated successfully.
Sep 3 23:59:08.425335 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 3 23:59:08.438037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 3 23:59:08.439472 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 3 23:59:08.441310 jq[1535]: true
Sep 3 23:59:08.477196 extend-filesystems[1513]: Resized partition /dev/sda9
Sep 3 23:59:08.506220 extend-filesystems[1553]: resize2fs 1.47.2 (1-Jan-2025)
Sep 3 23:59:08.518848 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 3 23:59:08.518952 update_engine[1533]: I20250903 23:59:08.496570 1533 main.cc:92] Flatcar Update Engine starting
Sep 3 23:59:08.519851 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 3 23:59:08.567537 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 3 23:59:08.580148 jq[1543]: true
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: ----------------------------------------------------
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: ntp-4 is maintained by Network Time Foundation,
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: corporation. Support and training for ntp-4 are
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: available at https://www.nwtime.org/support
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: ----------------------------------------------------
Sep 3 23:59:08.601946 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: proto: precision = 0.071 usec (-24)
Sep 3 23:59:08.583152 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:33:36 UTC 2025 (1): Starting
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: basedate set to 2025-08-22
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: gps base set to 2025-08-24 (week 2381)
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listen normally on 3 eth0 10.128.0.18:123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listen normally on 4 lo [::1]:123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:12%2#123
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:12%2
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: Listening on routing socket on fd #21 for interface updates
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:59:08.656449 ntpd[1518]: 3 Sep 23:59:08 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:59:08.657534 extend-filesystems[1553]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 3 23:59:08.657534 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 3 23:59:08.657534 extend-filesystems[1553]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 3 23:59:08.583211 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 3 23:59:08.625236 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 3 23:59:08.689584 extend-filesystems[1513]: Resized filesystem in /dev/sda9
Sep 3 23:59:08.583227 ntpd[1518]: ----------------------------------------------------
Sep 3 23:59:08.625588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 3 23:59:08.583242 ntpd[1518]: ntp-4 is maintained by Network Time Foundation,
Sep 3 23:59:08.583255 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 3 23:59:08.583269 ntpd[1518]: corporation. Support and training for ntp-4 are
Sep 3 23:59:08.583284 ntpd[1518]: available at https://www.nwtime.org/support
Sep 3 23:59:08.583298 ntpd[1518]: ----------------------------------------------------
Sep 3 23:59:08.593998 ntpd[1518]: proto: precision = 0.071 usec (-24)
Sep 3 23:59:08.600818 ntpd[1518]: basedate set to 2025-08-22
Sep 3 23:59:08.602265 ntpd[1518]: gps base set to 2025-08-24 (week 2381)
Sep 3 23:59:08.616793 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123
Sep 3 23:59:08.616866 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 3 23:59:08.617260 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123
Sep 3 23:59:08.619218 ntpd[1518]: Listen normally on 3 eth0 10.128.0.18:123
Sep 3 23:59:08.619306 ntpd[1518]: Listen normally on 4 lo [::1]:123
Sep 3 23:59:08.619386 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:59:08.619417 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:12%2#123
Sep 3 23:59:08.619470 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:12%2
Sep 3 23:59:08.619527 ntpd[1518]: Listening on routing socket on fd #21 for interface updates
Sep 3 23:59:08.629359 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:59:08.629399 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:59:08.710160 kernel: EDAC MC: Ver: 3.0.0
Sep 3 23:59:08.716013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 3 23:59:08.752946 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 3 23:59:08.763937 systemd[1]: Reached target network.target - Network.
Sep 3 23:59:08.775102 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 3 23:59:08.785413 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 3 23:59:08.789666 tar[1539]: linux-amd64/LICENSE
Sep 3 23:59:08.790231 tar[1539]: linux-amd64/helm
Sep 3 23:59:08.791455 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:59:08.806360 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:59:08.824225 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:59:08.894447 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 3 23:59:08.900252 bash[1599]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:59:08.918814 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 3 23:59:08.919616 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 3 23:59:08.941337 systemd[1]: Starting sshkeys.service...
Sep 3 23:59:08.958927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:59:08.982387 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 3 23:59:08.986812 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 3 23:59:08.998601 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 3 23:59:08.999998 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 3 23:59:09.000207 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 3 23:59:09.000236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 3 23:59:09.041891 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 3 23:59:09.050661 update_engine[1533]: I20250903 23:59:09.050393 1533 update_check_scheduler.cc:74] Next update check in 8m35s
Sep 3 23:59:09.064499 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 3 23:59:09.075928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:59:09.103522 systemd[1]: Started update-engine.service - Update Engine.
Sep 3 23:59:09.124602 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 3 23:59:09.146875 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 3 23:59:09.197494 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 3 23:59:09.218120 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:59:09.412439 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 3 23:59:09.457497 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 3 23:59:09.510304 coreos-metadata[1612]: Sep 03 23:59:09.508 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 3 23:59:09.510972 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 3 23:59:09.511906 coreos-metadata[1612]: Sep 03 23:59:09.511 INFO Fetch failed with 404: resource not found
Sep 3 23:59:09.511906 coreos-metadata[1612]: Sep 03 23:59:09.511 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 3 23:59:09.511016 systemd-logind[1531]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 3 23:59:09.511051 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 3 23:59:09.512464 systemd-logind[1531]: New seat seat0.
Sep 3 23:59:09.515701 coreos-metadata[1612]: Sep 03 23:59:09.515 INFO Fetch successful
Sep 3 23:59:09.515701 coreos-metadata[1612]: Sep 03 23:59:09.515 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 3 23:59:09.517024 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 3 23:59:09.521403 coreos-metadata[1612]: Sep 03 23:59:09.520 INFO Fetch failed with 404: resource not found
Sep 3 23:59:09.521403 coreos-metadata[1612]: Sep 03 23:59:09.520 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 3 23:59:09.524024 coreos-metadata[1612]: Sep 03 23:59:09.521 INFO Fetch failed with 404: resource not found
Sep 3 23:59:09.524024 coreos-metadata[1612]: Sep 03 23:59:09.521 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 3 23:59:09.528983 coreos-metadata[1612]: Sep 03 23:59:09.525 INFO Fetch successful
Sep 3 23:59:09.536311 unknown[1612]: wrote ssh authorized keys file for user: core
Sep 3 23:59:09.584896 ntpd[1518]: bind(24) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:59:09.587599 ntpd[1518]: 3 Sep 23:59:09 ntpd[1518]: bind(24) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:59:09.587599 ntpd[1518]: 3 Sep 23:59:09 ntpd[1518]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:12%2#123
Sep 3 23:59:09.587599 ntpd[1518]: 3 Sep 23:59:09 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:12%2
Sep 3 23:59:09.586211 ntpd[1518]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:12%2#123
Sep 3 23:59:09.586234 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:12%2
Sep 3 23:59:09.614700 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 3 23:59:09.627379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:59:09.640849 update-ssh-keys[1637]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:59:09.644353 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 3 23:59:09.655675 systemd-networkd[1457]: eth0: Gained IPv6LL
Sep 3 23:59:09.660122 systemd[1]: Started sshd@0-10.128.0.18:22-147.75.109.163:50162.service - OpenSSH per-connection server daemon (147.75.109.163:50162).
Sep 3 23:59:09.676516 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 3 23:59:09.695553 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 3 23:59:09.709352 systemd[1]: Finished sshkeys.service.
Sep 3 23:59:09.723212 systemd[1]: Reached target network-online.target - Network is Online.
Sep 3 23:59:09.737609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:09.758289 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 3 23:59:09.779625 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 3 23:59:09.847618 init.sh[1651]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 3 23:59:09.847618 init.sh[1651]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 3 23:59:09.847618 init.sh[1651]: + /usr/bin/google_instance_setup
Sep 3 23:59:09.873296 systemd[1]: issuegen.service: Deactivated successfully.
Sep 3 23:59:09.873639 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 3 23:59:09.934623 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 3 23:59:09.991569 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 3 23:59:10.022709 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 3 23:59:10.047939 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1608 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 3 23:59:10.066620 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 3 23:59:10.110270 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 3 23:59:10.132776 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 3 23:59:10.151529 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 3 23:59:10.161699 systemd[1]: Reached target getty.target - Login Prompts.
Sep 3 23:59:10.171218 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 3 23:59:10.327241 containerd[1603]: time="2025-09-03T23:59:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 3 23:59:10.335722 containerd[1603]: time="2025-09-03T23:59:10.332817073Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 3 23:59:10.371312 containerd[1603]: time="2025-09-03T23:59:10.371243079Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.472µs"
Sep 3 23:59:10.371312 containerd[1603]: time="2025-09-03T23:59:10.371304189Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 3 23:59:10.371514 containerd[1603]: time="2025-09-03T23:59:10.371333119Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 3 23:59:10.371612 containerd[1603]: time="2025-09-03T23:59:10.371564167Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 3 23:59:10.371716 containerd[1603]: time="2025-09-03T23:59:10.371610752Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 3 23:59:10.371716 containerd[1603]: time="2025-09-03T23:59:10.371668070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:59:10.371814 containerd[1603]: time="2025-09-03T23:59:10.371779413Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:59:10.371814 containerd[1603]: time="2025-09-03T23:59:10.371801185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.372453123Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.372523550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.372556527Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.372573899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.372968622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.373802553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.374034961Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.374104265Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 3 23:59:10.374965 containerd[1603]: time="2025-09-03T23:59:10.374212792Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 3 23:59:10.375594 containerd[1603]: time="2025-09-03T23:59:10.375035549Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 3 23:59:10.375594 containerd[1603]: time="2025-09-03T23:59:10.375317361Z" level=info msg="metadata content store policy set" policy=shared
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398246736Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398344676Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398371345Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398405442Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398427701Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398445990Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398471454Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398506451Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398528067Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398545211Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398562634Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398586029Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398765730Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 3 23:59:10.399246 containerd[1603]: time="2025-09-03T23:59:10.398797438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398823390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398859633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398890654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398913473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398932641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398949831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398968849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.398986430Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.399006178Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 3 23:59:10.399942 containerd[1603]: time="2025-09-03T23:59:10.399105168Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 3 23:59:10.406354 containerd[1603]: time="2025-09-03T23:59:10.401286681Z" level=info msg="Start snapshots syncer"
Sep 3 23:59:10.406354 containerd[1603]: time="2025-09-03T23:59:10.401389401Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 3 23:59:10.411339 containerd[1603]: time="2025-09-03T23:59:10.408584448Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 3 23:59:10.411339 containerd[1603]: time="2025-09-03T23:59:10.408706750Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.408955961Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.410299712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.410366438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.410398244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.411209258Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.411261268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.411282827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 3 23:59:10.411617 containerd[1603]: time="2025-09-03T23:59:10.411303700Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416072680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416122010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416160755Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416212455Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416238329Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416254884Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416270783Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416285371Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416301570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416321471Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416348716Z" level=info msg="runtime interface created"
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416358579Z" level=info msg="created NRI interface"
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416373085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416396396Z" level=info msg="Connect containerd service"
Sep 3 23:59:10.419444 containerd[1603]: time="2025-09-03T23:59:10.416441949Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 3 23:59:10.432597 containerd[1603]: time="2025-09-03T23:59:10.428636278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:59:10.520026 sshd[1643]: Accepted publickey for core from 147.75.109.163 port 50162 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:10.527729 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:10.541915 polkitd[1663]: Started polkitd version 126
Sep 3 23:59:10.558239 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 3 23:59:10.572546 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 3 23:59:10.645252 systemd-logind[1531]: New session 1 of user core.
Sep 3 23:59:10.648621 polkitd[1663]: Loading rules from directory /etc/polkit-1/rules.d
Sep 3 23:59:10.649347 polkitd[1663]: Loading rules from directory /run/polkit-1/rules.d
Sep 3 23:59:10.649412 polkitd[1663]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 3 23:59:10.649984 polkitd[1663]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 3 23:59:10.650020 polkitd[1663]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 3 23:59:10.650083 polkitd[1663]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 3 23:59:10.660638 polkitd[1663]: Finished loading, compiling and executing 2 rules
Sep 3 23:59:10.664740 systemd[1]: Started polkit.service - Authorization Manager.
Sep 3 23:59:10.674452 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 3 23:59:10.677803 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 3 23:59:10.678389 polkitd[1663]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 3 23:59:10.694609 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 3 23:59:10.754455 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 3 23:59:10.768724 systemd-logind[1531]: New session c1 of user core.
Sep 3 23:59:10.803891 systemd-hostnamed[1608]: Hostname set to (transient)
Sep 3 23:59:10.809353 systemd-resolved[1384]: System hostname changed to 'ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436'.
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130681456Z" level=info msg="Start subscribing containerd event"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130766546Z" level=info msg="Start recovering state"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130924764Z" level=info msg="Start event monitor"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130946332Z" level=info msg="Start cni network conf syncer for default"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130958177Z" level=info msg="Start streaming server"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130982802Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.130997508Z" level=info msg="runtime interface starting up..."
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.131010116Z" level=info msg="starting plugins..."
Sep 3 23:59:11.132085 containerd[1603]: time="2025-09-03T23:59:11.131038222Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 3 23:59:11.135776 containerd[1603]: time="2025-09-03T23:59:11.133997934Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 3 23:59:11.136960 containerd[1603]: time="2025-09-03T23:59:11.136911676Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 3 23:59:11.142679 containerd[1603]: time="2025-09-03T23:59:11.142269828Z" level=info msg="containerd successfully booted in 0.818637s"
Sep 3 23:59:11.142411 systemd[1]: Started containerd.service - containerd container runtime.
Sep 3 23:59:11.259617 tar[1539]: linux-amd64/README.md
Sep 3 23:59:11.264601 systemd[1688]: Queued start job for default target default.target.
Sep 3 23:59:11.273014 systemd[1688]: Created slice app.slice - User Application Slice.
Sep 3 23:59:11.273070 systemd[1688]: Reached target paths.target - Paths.
Sep 3 23:59:11.273172 systemd[1688]: Reached target timers.target - Timers.
Sep 3 23:59:11.277588 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 3 23:59:11.307336 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 3 23:59:11.308739 systemd[1688]: Reached target sockets.target - Sockets.
Sep 3 23:59:11.308823 systemd[1688]: Reached target basic.target - Basic System.
Sep 3 23:59:11.308908 systemd[1688]: Reached target default.target - Main User Target.
Sep 3 23:59:11.308967 systemd[1688]: Startup finished in 508ms.
Sep 3 23:59:11.310548 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 3 23:59:11.321803 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 3 23:59:11.346061 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 3 23:59:11.596702 systemd[1]: Started sshd@1-10.128.0.18:22-147.75.109.163:41154.service - OpenSSH per-connection server daemon (147.75.109.163:41154).
Sep 3 23:59:11.633732 instance-setup[1655]: INFO Running google_set_multiqueue.
Sep 3 23:59:11.676215 instance-setup[1655]: INFO Set channels for eth0 to 2.
Sep 3 23:59:11.682919 instance-setup[1655]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Sep 3 23:59:11.685493 instance-setup[1655]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Sep 3 23:59:11.685780 instance-setup[1655]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Sep 3 23:59:11.688180 instance-setup[1655]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Sep 3 23:59:11.688260 instance-setup[1655]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Sep 3 23:59:11.689854 instance-setup[1655]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Sep 3 23:59:11.690436 instance-setup[1655]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Sep 3 23:59:11.693516 instance-setup[1655]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Sep 3 23:59:11.705793 instance-setup[1655]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 3 23:59:11.712355 instance-setup[1655]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 3 23:59:11.716496 instance-setup[1655]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Sep 3 23:59:11.716556 instance-setup[1655]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Sep 3 23:59:11.753423 init.sh[1651]: + /usr/bin/google_metadata_script_runner --script-type startup
Sep 3 23:59:11.959219 startup-script[1742]: INFO Starting startup scripts.
Sep 3 23:59:11.965575 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 41154 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:11.967189 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:11.968190 startup-script[1742]: INFO No startup scripts found in metadata.
Sep 3 23:59:11.968279 startup-script[1742]: INFO Finished running startup scripts.
Sep 3 23:59:11.984663 systemd-logind[1531]: New session 2 of user core.
Sep 3 23:59:11.990366 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 3 23:59:11.996913 init.sh[1651]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Sep 3 23:59:11.996913 init.sh[1651]: + daemon_pids=()
Sep 3 23:59:11.996913 init.sh[1651]: + for d in accounts clock_skew network
Sep 3 23:59:11.997635 init.sh[1651]: + daemon_pids+=($!)
Sep 3 23:59:11.997635 init.sh[1651]: + for d in accounts clock_skew network
Sep 3 23:59:11.997781 init.sh[1745]: + /usr/bin/google_accounts_daemon
Sep 3 23:59:11.998244 init.sh[1651]: + daemon_pids+=($!)
Sep 3 23:59:11.998244 init.sh[1651]: + for d in accounts clock_skew network
Sep 3 23:59:11.999879 init.sh[1746]: + /usr/bin/google_clock_skew_daemon
Sep 3 23:59:12.000228 init.sh[1651]: + daemon_pids+=($!)
Sep 3 23:59:12.000228 init.sh[1651]: + NOTIFY_SOCKET=/run/systemd/notify
Sep 3 23:59:12.000228 init.sh[1651]: + /usr/bin/systemd-notify --ready
Sep 3 23:59:12.000407 init.sh[1747]: + /usr/bin/google_network_daemon
Sep 3 23:59:12.028388 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Sep 3 23:59:12.042867 init.sh[1651]: + wait -n 1745 1746 1747
Sep 3 23:59:12.212951 sshd[1749]: Connection closed by 147.75.109.163 port 41154
Sep 3 23:59:12.211496 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:12.223464 systemd[1]: sshd@1-10.128.0.18:22-147.75.109.163:41154.service: Deactivated successfully.
Sep 3 23:59:12.229786 systemd[1]: session-2.scope: Deactivated successfully.
Sep 3 23:59:12.236704 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit.
Sep 3 23:59:12.240854 systemd-logind[1531]: Removed session 2.
Sep 3 23:59:12.275369 systemd[1]: Started sshd@2-10.128.0.18:22-147.75.109.163:41162.service - OpenSSH per-connection server daemon (147.75.109.163:41162).
Sep 3 23:59:12.433033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:12.443960 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 3 23:59:12.452282 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:59:12.454832 systemd[1]: Startup finished in 4.559s (kernel) + 9.638s (initrd) + 9.991s (userspace) = 24.189s.
Sep 3 23:59:12.583875 ntpd[1518]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:12%2]:123
Sep 3 23:59:12.585097 ntpd[1518]: 3 Sep 23:59:12 ntpd[1518]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:12%2]:123
Sep 3 23:59:12.593581 google-clock-skew[1746]: INFO Starting Google Clock Skew daemon.
Sep 3 23:59:12.605268 google-networking[1747]: INFO Starting Google Networking daemon.
Sep 3 23:59:12.613766 google-clock-skew[1746]: INFO Clock drift token has changed: 0.
Sep 3 23:59:12.664178 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 41162 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:12.670436 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:12.690213 systemd-logind[1531]: New session 3 of user core.
Sep 3 23:59:12.693472 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 3 23:59:12.734978 groupadd[1775]: group added to /etc/group: name=google-sudoers, GID=1000
Sep 3 23:59:12.743897 groupadd[1775]: group added to /etc/gshadow: name=google-sudoers
Sep 3 23:59:12.824382 groupadd[1775]: new group: name=google-sudoers, GID=1000
Sep 3 23:59:12.873983 google-accounts[1745]: INFO Starting Google Accounts daemon.
Sep 3 23:59:12.895237 google-accounts[1745]: WARNING OS Login not installed.
Sep 3 23:59:12.898548 sshd[1780]: Connection closed by 147.75.109.163 port 41162
Sep 3 23:59:12.898480 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:12.901123 google-accounts[1745]: INFO Creating a new user account for 0.
Sep 3 23:59:12.908654 systemd[1]: sshd@2-10.128.0.18:22-147.75.109.163:41162.service: Deactivated successfully.
Sep 3 23:59:12.911356 init.sh[1789]: useradd: invalid user name '0': use --badname to ignore
Sep 3 23:59:12.911791 google-accounts[1745]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Sep 3 23:59:12.913921 systemd[1]: session-3.scope: Deactivated successfully.
Sep 3 23:59:12.918734 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit.
Sep 3 23:59:12.930689 systemd-logind[1531]: Removed session 3.
Sep 3 23:59:13.001186 systemd-resolved[1384]: Clock change detected. Flushing caches.
Sep 3 23:59:13.001685 google-clock-skew[1746]: INFO Synced system time with hardware clock.
Sep 3 23:59:13.356693 kubelet[1768]: E0903 23:59:13.356481 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:59:13.360394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:59:13.360662 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:59:13.361308 systemd[1]: kubelet.service: Consumed 1.475s CPU time, 268M memory peak.
Sep 3 23:59:22.822995 systemd[1]: Started sshd@3-10.128.0.18:22-147.75.109.163:50230.service - OpenSSH per-connection server daemon (147.75.109.163:50230).
Sep 3 23:59:23.133456 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 50230 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:23.136381 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:23.146990 systemd-logind[1531]: New session 4 of user core.
Sep 3 23:59:23.153468 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 3 23:59:23.348332 sshd[1799]: Connection closed by 147.75.109.163 port 50230
Sep 3 23:59:23.349589 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:23.356806 systemd[1]: sshd@3-10.128.0.18:22-147.75.109.163:50230.service: Deactivated successfully.
Sep 3 23:59:23.359888 systemd[1]: session-4.scope: Deactivated successfully.
Sep 3 23:59:23.361258 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit.
Sep 3 23:59:23.363601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:59:23.366407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:23.367556 systemd-logind[1531]: Removed session 4.
Sep 3 23:59:23.403322 systemd[1]: Started sshd@4-10.128.0.18:22-147.75.109.163:50244.service - OpenSSH per-connection server daemon (147.75.109.163:50244).
Sep 3 23:59:23.731522 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 50244 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:23.733571 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:23.744838 systemd-logind[1531]: New session 5 of user core.
Sep 3 23:59:23.749695 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 3 23:59:23.753243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:23.769980 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:59:23.843164 kubelet[1814]: E0903 23:59:23.843083 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:59:23.848164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:59:23.848446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:59:23.849638 systemd[1]: kubelet.service: Consumed 258ms CPU time, 108.8M memory peak.
Sep 3 23:59:23.942257 sshd[1816]: Connection closed by 147.75.109.163 port 50244
Sep 3 23:59:23.944385 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:23.950555 systemd[1]: sshd@4-10.128.0.18:22-147.75.109.163:50244.service: Deactivated successfully.
Sep 3 23:59:23.953330 systemd[1]: session-5.scope: Deactivated successfully.
Sep 3 23:59:23.955038 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit.
Sep 3 23:59:23.957990 systemd-logind[1531]: Removed session 5.
Sep 3 23:59:24.000581 systemd[1]: Started sshd@5-10.128.0.18:22-147.75.109.163:50248.service - OpenSSH per-connection server daemon (147.75.109.163:50248).
Sep 3 23:59:24.310872 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 50248 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:24.312925 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:24.322106 systemd-logind[1531]: New session 6 of user core.
Sep 3 23:59:24.328454 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 3 23:59:24.527838 sshd[1831]: Connection closed by 147.75.109.163 port 50248
Sep 3 23:59:24.528830 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:24.534829 systemd[1]: sshd@5-10.128.0.18:22-147.75.109.163:50248.service: Deactivated successfully.
Sep 3 23:59:24.537481 systemd[1]: session-6.scope: Deactivated successfully.
Sep 3 23:59:24.539365 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit.
Sep 3 23:59:24.541887 systemd-logind[1531]: Removed session 6.
Sep 3 23:59:24.595338 systemd[1]: Started sshd@6-10.128.0.18:22-147.75.109.163:50264.service - OpenSSH per-connection server daemon (147.75.109.163:50264).
Sep 3 23:59:24.911932 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 50264 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:24.914532 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:24.923629 systemd-logind[1531]: New session 7 of user core.
Sep 3 23:59:24.930474 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 3 23:59:25.111790 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 3 23:59:25.112311 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:59:25.130326 sudo[1840]: pam_unix(sudo:session): session closed for user root
Sep 3 23:59:25.174705 sshd[1839]: Connection closed by 147.75.109.163 port 50264
Sep 3 23:59:25.176424 sshd-session[1837]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:25.181628 systemd[1]: sshd@6-10.128.0.18:22-147.75.109.163:50264.service: Deactivated successfully.
Sep 3 23:59:25.184415 systemd[1]: session-7.scope: Deactivated successfully.
Sep 3 23:59:25.188264 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit.
Sep 3 23:59:25.190607 systemd-logind[1531]: Removed session 7.
Sep 3 23:59:25.233896 systemd[1]: Started sshd@7-10.128.0.18:22-147.75.109.163:50274.service - OpenSSH per-connection server daemon (147.75.109.163:50274).
Sep 3 23:59:25.554403 sshd[1846]: Accepted publickey for core from 147.75.109.163 port 50274 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:25.556646 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:25.566163 systemd-logind[1531]: New session 8 of user core.
Sep 3 23:59:25.573329 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 3 23:59:25.734845 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 3 23:59:25.735362 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:59:25.743315 sudo[1850]: pam_unix(sudo:session): session closed for user root
Sep 3 23:59:25.756743 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 3 23:59:25.757245 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:59:25.771686 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:59:25.826141 augenrules[1872]: No rules
Sep 3 23:59:25.825931 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:59:25.826452 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:59:25.827941 sudo[1849]: pam_unix(sudo:session): session closed for user root
Sep 3 23:59:25.871549 sshd[1848]: Connection closed by 147.75.109.163 port 50274
Sep 3 23:59:25.872670 sshd-session[1846]: pam_unix(sshd:session): session closed for user core
Sep 3 23:59:25.880016 systemd[1]: sshd@7-10.128.0.18:22-147.75.109.163:50274.service: Deactivated successfully.
Sep 3 23:59:25.883397 systemd[1]: session-8.scope: Deactivated successfully.
Sep 3 23:59:25.885613 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit.
Sep 3 23:59:25.888539 systemd-logind[1531]: Removed session 8.
Sep 3 23:59:25.929037 systemd[1]: Started sshd@8-10.128.0.18:22-147.75.109.163:50280.service - OpenSSH per-connection server daemon (147.75.109.163:50280).
Sep 3 23:59:26.247188 sshd[1881]: Accepted publickey for core from 147.75.109.163 port 50280 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 3 23:59:26.249211 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:59:26.256496 systemd-logind[1531]: New session 9 of user core.
Sep 3 23:59:26.264431 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 3 23:59:26.433537 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 3 23:59:26.434897 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:59:26.992542 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 3 23:59:27.011870 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 3 23:59:27.390356 dockerd[1902]: time="2025-09-03T23:59:27.389932552Z" level=info msg="Starting up"
Sep 3 23:59:27.391601 dockerd[1902]: time="2025-09-03T23:59:27.391569338Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 3 23:59:27.491750 dockerd[1902]: time="2025-09-03T23:59:27.491443566Z" level=info msg="Loading containers: start."
Sep 3 23:59:27.510098 kernel: Initializing XFRM netlink socket
Sep 3 23:59:27.915238 systemd-networkd[1457]: docker0: Link UP
Sep 3 23:59:27.924982 dockerd[1902]: time="2025-09-03T23:59:27.924900051Z" level=info msg="Loading containers: done."
Sep 3 23:59:27.954599 dockerd[1902]: time="2025-09-03T23:59:27.954528431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 3 23:59:27.954860 dockerd[1902]: time="2025-09-03T23:59:27.954677689Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 3 23:59:27.954920 dockerd[1902]: time="2025-09-03T23:59:27.954870309Z" level=info msg="Initializing buildkit"
Sep 3 23:59:27.998158 dockerd[1902]: time="2025-09-03T23:59:27.998093060Z" level=info msg="Completed buildkit initialization"
Sep 3 23:59:28.006547 dockerd[1902]: time="2025-09-03T23:59:28.006474943Z" level=info msg="Daemon has completed initialization"
Sep 3 23:59:28.007102 dockerd[1902]: time="2025-09-03T23:59:28.006651528Z" level=info msg="API listen on /run/docker.sock"
Sep 3 23:59:28.006830 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 3 23:59:29.039284 containerd[1603]: time="2025-09-03T23:59:29.039216688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 3 23:59:29.725221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525537828.mount: Deactivated successfully.
Sep 3 23:59:31.697292 containerd[1603]: time="2025-09-03T23:59:31.697196465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:31.699099 containerd[1603]: time="2025-09-03T23:59:31.698960304Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30085292"
Sep 3 23:59:31.701214 containerd[1603]: time="2025-09-03T23:59:31.701113868Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:31.705005 containerd[1603]: time="2025-09-03T23:59:31.704935685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:31.706651 containerd[1603]: time="2025-09-03T23:59:31.706369171Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.667088334s"
Sep 3 23:59:31.706651 containerd[1603]: time="2025-09-03T23:59:31.706424123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 3 23:59:31.707574 containerd[1603]: time="2025-09-03T23:59:31.707519894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 3 23:59:33.644090 containerd[1603]: time="2025-09-03T23:59:33.643891911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:33.645556 containerd[1603]: time="2025-09-03T23:59:33.645512894Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26020000"
Sep 3 23:59:33.647206 containerd[1603]: time="2025-09-03T23:59:33.647167590Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:33.652695 containerd[1603]: time="2025-09-03T23:59:33.652620661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:33.654262 containerd[1603]: time="2025-09-03T23:59:33.654219386Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.946649032s"
Sep 3 23:59:33.654435 containerd[1603]: time="2025-09-03T23:59:33.654410587Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 3 23:59:33.656106 containerd[1603]: time="2025-09-03T23:59:33.656033569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 3 23:59:34.094604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:59:34.097627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:34.545305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
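The "Scheduled restart job, restart counter is at 2" lines show systemd's restart policy re-launching the failing kubelet unit on a timer. A sketch of the kind of unit settings that produce this behavior (a hypothetical drop-in with illustrative values; the unit actually shipped on this node may differ):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-restart.conf
# (hypothetical path and values, shown only to explain the restart loop in the log).
[Service]
Restart=on-failure
RestartSec=10
```

With settings like these, each status=1/FAILURE exit increments the restart counter and schedules the next start attempt after RestartSec.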
Sep 3 23:59:34.570948 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:59:34.689010 kubelet[2169]: E0903 23:59:34.688570 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:59:34.695613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:59:34.696199 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:59:34.697385 systemd[1]: kubelet.service: Consumed 262ms CPU time, 108.5M memory peak.
Sep 3 23:59:35.436889 containerd[1603]: time="2025-09-03T23:59:35.436813747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:35.437876 containerd[1603]: time="2025-09-03T23:59:35.437772381Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20155827"
Sep 3 23:59:35.440135 containerd[1603]: time="2025-09-03T23:59:35.439130908Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:35.443705 containerd[1603]: time="2025-09-03T23:59:35.443610377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:35.445632 containerd[1603]: time="2025-09-03T23:59:35.445447886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.789349641s"
Sep 3 23:59:35.445632 containerd[1603]: time="2025-09-03T23:59:35.445498274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 3 23:59:35.446699 containerd[1603]: time="2025-09-03T23:59:35.446653409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 3 23:59:36.737693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372048012.mount: Deactivated successfully.
Sep 3 23:59:37.571269 containerd[1603]: time="2025-09-03T23:59:37.571178667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:37.573022 containerd[1603]: time="2025-09-03T23:59:37.572720085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31901521"
Sep 3 23:59:37.574591 containerd[1603]: time="2025-09-03T23:59:37.574523084Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:37.579218 containerd[1603]: time="2025-09-03T23:59:37.579161680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:37.580328 containerd[1603]: time="2025-09-03T23:59:37.580272955Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.133395845s"
Sep 3 23:59:37.580489 containerd[1603]: time="2025-09-03T23:59:37.580463494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 3 23:59:37.581323 containerd[1603]: time="2025-09-03T23:59:37.581265508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 3 23:59:38.043523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410757948.mount: Deactivated successfully.
Sep 3 23:59:39.528697 containerd[1603]: time="2025-09-03T23:59:39.528571357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:39.531090 containerd[1603]: time="2025-09-03T23:59:39.530697260Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880"
Sep 3 23:59:39.535493 containerd[1603]: time="2025-09-03T23:59:39.534352089Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:39.540856 containerd[1603]: time="2025-09-03T23:59:39.540799494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:39.542654 containerd[1603]: time="2025-09-03T23:59:39.542609821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.961300239s"
Sep 3 23:59:39.542911 containerd[1603]: time="2025-09-03T23:59:39.542877413Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 3 23:59:39.543881 containerd[1603]: time="2025-09-03T23:59:39.543852174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:59:40.011297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867266448.mount: Deactivated successfully.
Sep 3 23:59:40.018968 containerd[1603]: time="2025-09-03T23:59:40.018896066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:59:40.020642 containerd[1603]: time="2025-09-03T23:59:40.020324369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Sep 3 23:59:40.022613 containerd[1603]: time="2025-09-03T23:59:40.022570091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:59:40.026410 containerd[1603]: time="2025-09-03T23:59:40.026368403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:59:40.028080 containerd[1603]: time="2025-09-03T23:59:40.028005192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 483.781591ms"
Sep 3 23:59:40.028508 containerd[1603]: time="2025-09-03T23:59:40.028465107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 3 23:59:40.029375 containerd[1603]: time="2025-09-03T23:59:40.029333537Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 3 23:59:40.494878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197276552.mount: Deactivated successfully.
Sep 3 23:59:40.690556 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 3 23:59:42.996699 containerd[1603]: time="2025-09-03T23:59:42.996621441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:42.998414 containerd[1603]: time="2025-09-03T23:59:42.998078223Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58383547"
Sep 3 23:59:43.000157 containerd[1603]: time="2025-09-03T23:59:43.000115222Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:43.004364 containerd[1603]: time="2025-09-03T23:59:43.004328453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:59:43.006034 containerd[1603]: time="2025-09-03T23:59:43.005987857Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.976607286s"
Sep 3 23:59:43.006154 containerd[1603]: time="2025-09-03T23:59:43.006039849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 3 23:59:44.844371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 3 23:59:44.846837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:45.382562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:45.397986 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:59:45.495641 kubelet[2331]: E0903 23:59:45.495509 2331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:59:45.500838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:59:45.501747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:59:45.502586 systemd[1]: kubelet.service: Consumed 268ms CPU time, 110.3M memory peak.
Sep 3 23:59:46.355800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:46.356767 systemd[1]: kubelet.service: Consumed 268ms CPU time, 110.3M memory peak.
Sep 3 23:59:46.360720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:46.402919 systemd[1]: Reload requested from client PID 2345 ('systemctl') (unit session-9.scope)...
Sep 3 23:59:46.402950 systemd[1]: Reloading...
Sep 3 23:59:46.631177 zram_generator::config[2392]: No configuration found.
Sep 3 23:59:46.759989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:59:46.937641 systemd[1]: Reloading finished in 533 ms.
Sep 3 23:59:47.022676 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:59:47.022835 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:59:47.023277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:47.023354 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.3M memory peak.
Sep 3 23:59:47.025886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:59:47.355438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:59:47.372045 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:59:47.446578 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:59:47.446578 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:59:47.446578 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
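The deprecation warnings at 23:59:47 say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed to the kubelet's --config flag. A hedged sketch of the corresponding KubeletConfiguration fields (values are illustrative for this node, the containerd socket path is an assumption, and field names should be checked against the kubelet config API reference):

```yaml
# Sketch of /var/lib/kubelet/config.yaml equivalents for the deprecated flags
# (illustrative; verify field names and values against the KubeletConfiguration
# v1beta1 reference before use).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

The volume plugin path above matches the Flexvolume directory the kubelet recreates later in this log.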
Sep 3 23:59:47.447317 kubelet[2440]: I0903 23:59:47.446672 2440 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:59:47.983967 kubelet[2440]: I0903 23:59:47.983902 2440 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 3 23:59:47.983967 kubelet[2440]: I0903 23:59:47.983938 2440 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:59:47.984316 kubelet[2440]: I0903 23:59:47.984281 2440 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 3 23:59:48.038846 kubelet[2440]: E0903 23:59:48.038766 2440 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 3 23:59:48.043083 kubelet[2440]: I0903 23:59:48.042357 2440 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:59:48.057425 kubelet[2440]: I0903 23:59:48.057370 2440 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:59:48.064979 kubelet[2440]: I0903 23:59:48.064930 2440 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:59:48.065438 kubelet[2440]: I0903 23:59:48.065388 2440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:59:48.065722 kubelet[2440]: I0903 23:59:48.065441 2440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:59:48.065952 kubelet[2440]: I0903 23:59:48.065739 2440 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:59:48.065952 kubelet[2440]: I0903 23:59:48.065758 2440 container_manager_linux.go:303] "Creating device plugin manager"
Sep 3 23:59:48.067493 kubelet[2440]: I0903 23:59:48.067435 2440 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:59:48.071144 kubelet[2440]: I0903 23:59:48.071088 2440 kubelet.go:480] "Attempting to sync node with API server"
Sep 3 23:59:48.071259 kubelet[2440]: I0903 23:59:48.071165 2440 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:59:48.071259 kubelet[2440]: I0903 23:59:48.071213 2440 kubelet.go:386] "Adding apiserver pod source"
Sep 3 23:59:48.073727 kubelet[2440]: I0903 23:59:48.073492 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:59:48.084103 kubelet[2440]: E0903 23:59:48.084034 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 3 23:59:48.087083 kubelet[2440]: E0903 23:59:48.086232 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 3 23:59:48.087083 kubelet[2440]: I0903 23:59:48.086364 2440 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:59:48.087242 kubelet[2440]: I0903 23:59:48.087142 2440 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 3 23:59:48.088979 kubelet[2440]: W0903 23:59:48.088925 2440 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:59:48.106317 kubelet[2440]: I0903 23:59:48.106237 2440 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 3 23:59:48.106317 kubelet[2440]: I0903 23:59:48.106319 2440 server.go:1289] "Started kubelet"
Sep 3 23:59:48.108512 kubelet[2440]: I0903 23:59:48.108097 2440 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:59:48.109727 kubelet[2440]: I0903 23:59:48.109527 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:59:48.110319 kubelet[2440]: I0903 23:59:48.110287 2440 server.go:317] "Adding debug handlers to kubelet server"
Sep 3 23:59:48.110705 kubelet[2440]: I0903 23:59:48.110635 2440 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:59:48.113531 kubelet[2440]: I0903 23:59:48.113505 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:59:48.114329 kubelet[2440]: I0903 23:59:48.114299 2440 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:59:48.120212 kubelet[2440]: I0903 23:59:48.119824 2440 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 3 23:59:48.120212 kubelet[2440]: E0903 23:59:48.120040 2440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found"
Sep 3 23:59:48.122219 kubelet[2440]: I0903 23:59:48.121330 2440 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 3 23:59:48.122219 kubelet[2440]: I0903 23:59:48.121425 2440 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:59:48.126912 kubelet[2440]: E0903 23:59:48.126865 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 3 23:59:48.127796 kubelet[2440]: E0903 23:59:48.126992 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="200ms"
Sep 3 23:59:48.131630 kubelet[2440]: E0903 23:59:48.126649 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436.1861eb3f9d550f0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,UID:ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,},FirstTimestamp:2025-09-03 23:59:48.106268426 +0000 UTC m=+0.727506680,LastTimestamp:2025-09-03 23:59:48.106268426 +0000 UTC m=+0.727506680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,}"
Sep 3 23:59:48.135589 kubelet[2440]: I0903 23:59:48.135563 2440 factory.go:223] Registration of the containerd container factory successfully
Sep 3 23:59:48.135899 kubelet[2440]: I0903 23:59:48.135837 2440 factory.go:223] Registration of the systemd container factory successfully
Sep 3 23:59:48.135992 kubelet[2440]: E0903 23:59:48.135957 2440 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:59:48.136544 kubelet[2440]: I0903 23:59:48.136519 2440 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:59:48.168791 kubelet[2440]: I0903 23:59:48.168737 2440 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 3 23:59:48.168791 kubelet[2440]: I0903 23:59:48.168764 2440 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 3 23:59:48.168791 kubelet[2440]: I0903 23:59:48.168797 2440 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:59:48.171721 kubelet[2440]: I0903 23:59:48.171678 2440 policy_none.go:49] "None policy: Start"
Sep 3 23:59:48.171721 kubelet[2440]: I0903 23:59:48.171707 2440 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 3 23:59:48.171721 kubelet[2440]: I0903 23:59:48.171726 2440 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:59:48.187808 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 3 23:59:48.188700 kubelet[2440]: I0903 23:59:48.188592 2440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:59:48.191716 kubelet[2440]: I0903 23:59:48.191620 2440 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:59:48.191716 kubelet[2440]: I0903 23:59:48.191665 2440 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 3 23:59:48.191716 kubelet[2440]: I0903 23:59:48.191697 2440 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 3 23:59:48.191716 kubelet[2440]: I0903 23:59:48.191710 2440 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 3 23:59:48.192091 kubelet[2440]: E0903 23:59:48.191790 2440 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:59:48.200019 kubelet[2440]: E0903 23:59:48.199622 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 3 23:59:48.205996 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 3 23:59:48.211974 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 3 23:59:48.220198 kubelet[2440]: E0903 23:59:48.220146 2440 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" Sep 3 23:59:48.228464 kubelet[2440]: E0903 23:59:48.227619 2440 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 3 23:59:48.228464 kubelet[2440]: I0903 23:59:48.228120 2440 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:59:48.228464 kubelet[2440]: I0903 23:59:48.228140 2440 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:59:48.228464 kubelet[2440]: I0903 23:59:48.228476 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:59:48.230938 kubelet[2440]: E0903 23:59:48.230425 2440 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:59:48.230938 kubelet[2440]: E0903 23:59:48.230520 2440 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" Sep 3 23:59:48.320542 systemd[1]: Created slice kubepods-burstable-pod727627f22e2e98f017dd1b437e48e656.slice - libcontainer container kubepods-burstable-pod727627f22e2e98f017dd1b437e48e656.slice. 
Sep 3 23:59:48.324303 kubelet[2440]: I0903 23:59:48.323229 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324303 kubelet[2440]: I0903 23:59:48.323281 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324303 kubelet[2440]: I0903 23:59:48.323315 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324303 kubelet[2440]: I0903 23:59:48.323345 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324586 kubelet[2440]: I0903 23:59:48.323392 2440 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324586 kubelet[2440]: I0903 23:59:48.323425 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.324586 kubelet[2440]: I0903 23:59:48.323460 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.328752 kubelet[2440]: E0903 23:59:48.328672 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="400ms" Sep 3 23:59:48.331960 kubelet[2440]: E0903 23:59:48.331660 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.337269 kubelet[2440]: I0903 23:59:48.337194 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.337869 kubelet[2440]: E0903 23:59:48.337831 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.341774 systemd[1]: Created slice kubepods-burstable-pod40898004a51e6da3c926b848b352cfbd.slice - libcontainer container kubepods-burstable-pod40898004a51e6da3c926b848b352cfbd.slice. Sep 3 23:59:48.345864 kubelet[2440]: E0903 23:59:48.345836 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.349293 systemd[1]: Created slice kubepods-burstable-poda90a434ccad47286bf9d13b5f67f34cd.slice - libcontainer container kubepods-burstable-poda90a434ccad47286bf9d13b5f67f34cd.slice. 
Sep 3 23:59:48.352288 kubelet[2440]: E0903 23:59:48.352260 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.424269 kubelet[2440]: I0903 23:59:48.424103 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.424269 kubelet[2440]: I0903 23:59:48.424160 2440 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a90a434ccad47286bf9d13b5f67f34cd-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"a90a434ccad47286bf9d13b5f67f34cd\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.543158 kubelet[2440]: I0903 23:59:48.543117 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.543765 kubelet[2440]: E0903 23:59:48.543574 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.634036 containerd[1603]: time="2025-09-03T23:59:48.633851620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:727627f22e2e98f017dd1b437e48e656,Namespace:kube-system,Attempt:0,}" Sep 3 23:59:48.648612 containerd[1603]: time="2025-09-03T23:59:48.648452046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:40898004a51e6da3c926b848b352cfbd,Namespace:kube-system,Attempt:0,}" Sep 3 23:59:48.654401 containerd[1603]: time="2025-09-03T23:59:48.654284474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:a90a434ccad47286bf9d13b5f67f34cd,Namespace:kube-system,Attempt:0,}" Sep 3 23:59:48.709688 containerd[1603]: time="2025-09-03T23:59:48.709353390Z" level=info msg="connecting to shim 9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947" address="unix:///run/containerd/s/b40a81b3e6801ac6ab8041d36d0c8b9d7b6b1e042d13da3edca840bc4503def1" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:59:48.734082 kubelet[2440]: E0903 23:59:48.733416 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="800ms" Sep 3 23:59:48.757416 containerd[1603]: time="2025-09-03T23:59:48.757337808Z" level=info msg="connecting to shim c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc" address="unix:///run/containerd/s/86bab1fed154fdd06e5b3ff7b37647bf27f1cd33f9d04dc74d939df520f98c45" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:59:48.768436 systemd[1]: Started cri-containerd-9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947.scope - libcontainer container 9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947. 
Sep 3 23:59:48.776712 containerd[1603]: time="2025-09-03T23:59:48.776641702Z" level=info msg="connecting to shim 6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045" address="unix:///run/containerd/s/22c6b862b0359e1f3f59e89b55cebd078d5893bfb4fa0d3c6810ba0e609c5b69" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:59:48.839080 systemd[1]: Started cri-containerd-c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc.scope - libcontainer container c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc. Sep 3 23:59:48.856256 systemd[1]: Started cri-containerd-6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045.scope - libcontainer container 6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045. Sep 3 23:59:48.951859 kubelet[2440]: I0903 23:59:48.951722 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.953598 kubelet[2440]: E0903 23:59:48.953546 2440 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:48.957554 containerd[1603]: time="2025-09-03T23:59:48.957489794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:727627f22e2e98f017dd1b437e48e656,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947\"" Sep 3 23:59:48.962044 kubelet[2440]: E0903 23:59:48.962007 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e" Sep 3 23:59:48.966331 containerd[1603]: 
time="2025-09-03T23:59:48.966283139Z" level=info msg="CreateContainer within sandbox \"9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:59:48.983259 containerd[1603]: time="2025-09-03T23:59:48.982299737Z" level=info msg="Container f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:59:49.003616 containerd[1603]: time="2025-09-03T23:59:49.003569334Z" level=info msg="CreateContainer within sandbox \"9a4890e03da160d22bcab8808eb8aa8200e93af61cd83e3adff14b9273d14947\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec\"" Sep 3 23:59:49.008897 containerd[1603]: time="2025-09-03T23:59:49.008819154Z" level=info msg="StartContainer for \"f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec\"" Sep 3 23:59:49.012596 containerd[1603]: time="2025-09-03T23:59:49.012536267Z" level=info msg="connecting to shim f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec" address="unix:///run/containerd/s/b40a81b3e6801ac6ab8041d36d0c8b9d7b6b1e042d13da3edca840bc4503def1" protocol=ttrpc version=3 Sep 3 23:59:49.014860 containerd[1603]: time="2025-09-03T23:59:49.014808192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:a90a434ccad47286bf9d13b5f67f34cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045\"" Sep 3 23:59:49.018076 kubelet[2440]: E0903 23:59:49.017484 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e" Sep 3 23:59:49.019711 containerd[1603]: 
time="2025-09-03T23:59:49.019562545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436,Uid:40898004a51e6da3c926b848b352cfbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc\"" Sep 3 23:59:49.022084 kubelet[2440]: E0903 23:59:49.021831 2440 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023" Sep 3 23:59:49.023353 containerd[1603]: time="2025-09-03T23:59:49.022615420Z" level=info msg="CreateContainer within sandbox \"6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:59:49.036333 containerd[1603]: time="2025-09-03T23:59:49.036286348Z" level=info msg="CreateContainer within sandbox \"c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:59:49.042924 containerd[1603]: time="2025-09-03T23:59:49.042884266Z" level=info msg="Container 0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:59:49.054696 systemd[1]: Started cri-containerd-f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec.scope - libcontainer container f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec. 
Sep 3 23:59:49.058713 containerd[1603]: time="2025-09-03T23:59:49.058287260Z" level=info msg="CreateContainer within sandbox \"6cdb94ad2f94e21119ecf0cf4f90ce0e15fe8e4930d985b995e57a75f5ed8045\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b\"" Sep 3 23:59:49.060849 containerd[1603]: time="2025-09-03T23:59:49.060806270Z" level=info msg="StartContainer for \"0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b\"" Sep 3 23:59:49.063139 containerd[1603]: time="2025-09-03T23:59:49.063074533Z" level=info msg="connecting to shim 0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b" address="unix:///run/containerd/s/22c6b862b0359e1f3f59e89b55cebd078d5893bfb4fa0d3c6810ba0e609c5b69" protocol=ttrpc version=3 Sep 3 23:59:49.064981 containerd[1603]: time="2025-09-03T23:59:49.064938584Z" level=info msg="Container ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:59:49.081980 containerd[1603]: time="2025-09-03T23:59:49.081900832Z" level=info msg="CreateContainer within sandbox \"c7d7749d6cc23c9ddfc919005b54344979dfb28d8d0ca2259594622e421509dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83\"" Sep 3 23:59:49.083927 containerd[1603]: time="2025-09-03T23:59:49.083866874Z" level=info msg="StartContainer for \"ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83\"" Sep 3 23:59:49.092139 containerd[1603]: time="2025-09-03T23:59:49.092086660Z" level=info msg="connecting to shim ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83" address="unix:///run/containerd/s/86bab1fed154fdd06e5b3ff7b37647bf27f1cd33f9d04dc74d939df520f98c45" protocol=ttrpc version=3 Sep 3 23:59:49.104341 systemd[1]: Started 
cri-containerd-0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b.scope - libcontainer container 0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b. Sep 3 23:59:49.142385 kubelet[2440]: E0903 23:59:49.142323 2440 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436&limit=500&resourceVersion=0\": dial tcp 10.128.0.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 3 23:59:49.145332 systemd[1]: Started cri-containerd-ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83.scope - libcontainer container ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83. Sep 3 23:59:49.224577 containerd[1603]: time="2025-09-03T23:59:49.223017058Z" level=info msg="StartContainer for \"f37436f229b7301a8e61955b57171a22b8781df41350bfb0f67c04d6c4e962ec\" returns successfully" Sep 3 23:59:49.278514 containerd[1603]: time="2025-09-03T23:59:49.278444699Z" level=info msg="StartContainer for \"0ad4f75c45084898b136931c6c167d67d2f16d98910d17f1375adbf0b101e86b\" returns successfully" Sep 3 23:59:49.339291 containerd[1603]: time="2025-09-03T23:59:49.339226100Z" level=info msg="StartContainer for \"ef706ceaeceb5ffb8ae580444210f8cdd3e392c08b66db957ea11dbafbc4ea83\" returns successfully" Sep 3 23:59:49.760112 kubelet[2440]: I0903 23:59:49.759509 2440 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:50.250076 kubelet[2440]: E0903 23:59:50.248829 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:50.252087 kubelet[2440]: E0903 23:59:50.250753 2440 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:50.252087 kubelet[2440]: E0903 23:59:50.250946 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:51.257729 kubelet[2440]: E0903 23:59:51.257627 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:51.262808 kubelet[2440]: E0903 23:59:51.258970 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:51.262808 kubelet[2440]: E0903 23:59:51.262603 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:52.262830 kubelet[2440]: E0903 23:59:52.262784 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:52.266078 kubelet[2440]: E0903 23:59:52.263864 2440 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not 
found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.746582 kubelet[2440]: E0903 23:59:53.746486 2440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.770518 kubelet[2440]: I0903 23:59:53.770467 2440 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.770721 kubelet[2440]: E0903 23:59:53.770559 2440 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\": node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" Sep 3 23:59:53.823119 kubelet[2440]: I0903 23:59:53.821259 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.842257 kubelet[2440]: E0903 23:59:53.842200 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.842257 kubelet[2440]: I0903 23:59:53.842252 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.849148 kubelet[2440]: E0903 23:59:53.848908 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 
23:59:53.849148 kubelet[2440]: I0903 23:59:53.848938 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:53.851811 kubelet[2440]: E0903 23:59:53.851778 2440 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:54.089164 kubelet[2440]: I0903 23:59:54.087560 2440 apiserver.go:52] "Watching apiserver" Sep 3 23:59:54.122575 kubelet[2440]: I0903 23:59:54.122472 2440 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:59:54.137733 update_engine[1533]: I20250903 23:59:54.137197 1533 update_attempter.cc:509] Updating boot flags... Sep 3 23:59:55.621432 kubelet[2440]: I0903 23:59:55.621395 2440 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:55.632072 kubelet[2440]: I0903 23:59:55.632008 2440 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 3 23:59:56.214677 systemd[1]: Reload requested from client PID 2747 ('systemctl') (unit session-9.scope)... Sep 3 23:59:56.214715 systemd[1]: Reloading... Sep 3 23:59:56.402186 zram_generator::config[2791]: No configuration found. Sep 3 23:59:56.562262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:59:56.792404 systemd[1]: Reloading finished in 576 ms. 
Sep 3 23:59:56.834534 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:59:56.849856 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:59:56.850822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:59:56.850970 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 132.8M memory peak. Sep 3 23:59:56.856034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:59:57.251725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:59:57.267850 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:59:57.379427 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:59:57.379427 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:59:57.379427 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 3 23:59:57.379996 kubelet[2839]: I0903 23:59:57.379547 2839 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:59:57.402092 kubelet[2839]: I0903 23:59:57.400230 2839 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 3 23:59:57.402092 kubelet[2839]: I0903 23:59:57.400263 2839 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:59:57.402092 kubelet[2839]: I0903 23:59:57.400573 2839 server.go:956] "Client rotation is on, will bootstrap in background" Sep 3 23:59:57.411093 kubelet[2839]: I0903 23:59:57.405551 2839 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 3 23:59:57.431112 kubelet[2839]: I0903 23:59:57.429564 2839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:59:57.445271 kubelet[2839]: I0903 23:59:57.445224 2839 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:59:57.466255 kubelet[2839]: I0903 23:59:57.466210 2839 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:59:57.466628 kubelet[2839]: I0903 23:59:57.466583 2839 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:59:57.467022 kubelet[2839]: I0903 23:59:57.466630 2839 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:59:57.467229 kubelet[2839]: I0903 23:59:57.467041 2839 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 3 23:59:57.467229 kubelet[2839]: I0903 23:59:57.467078 2839 container_manager_linux.go:303] "Creating device plugin manager" Sep 3 23:59:57.467229 kubelet[2839]: I0903 23:59:57.467149 2839 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:59:57.467405 kubelet[2839]: I0903 23:59:57.467382 2839 kubelet.go:480] "Attempting to sync node with API server" Sep 3 23:59:57.467471 kubelet[2839]: I0903 23:59:57.467414 2839 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:59:57.467471 kubelet[2839]: I0903 23:59:57.467453 2839 kubelet.go:386] "Adding apiserver pod source" Sep 3 23:59:57.467586 kubelet[2839]: I0903 23:59:57.467480 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:59:57.474887 sudo[2853]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:59:57.477276 kubelet[2839]: I0903 23:59:57.475221 2839 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:59:57.477276 kubelet[2839]: I0903 23:59:57.475969 2839 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 3 23:59:57.475516 sudo[2853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:59:57.510794 kubelet[2839]: I0903 23:59:57.510616 2839 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:59:57.510794 kubelet[2839]: I0903 23:59:57.510720 2839 server.go:1289] "Started kubelet" Sep 3 23:59:57.512571 kubelet[2839]: I0903 23:59:57.512194 2839 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:59:57.516346 kubelet[2839]: I0903 23:59:57.515662 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:59:57.526096 kubelet[2839]: I0903 23:59:57.525645 2839 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:59:57.526096 kubelet[2839]: I0903 23:59:57.519551 2839 server.go:317] "Adding debug handlers to kubelet server" Sep 3 23:59:57.545787 kubelet[2839]: I0903 23:59:57.545543 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:59:57.565040 kubelet[2839]: I0903 23:59:57.564264 2839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:59:57.565040 kubelet[2839]: I0903 23:59:57.564944 2839 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:59:57.565337 kubelet[2839]: E0903 23:59:57.565225 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" not found" Sep 3 23:59:57.569570 kubelet[2839]: I0903 23:59:57.567789 2839 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:59:57.569570 kubelet[2839]: I0903 23:59:57.567972 2839 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:59:57.573704 kubelet[2839]: I0903 23:59:57.573666 2839 factory.go:223] Registration of the systemd container factory successfully Sep 3 23:59:57.575482 kubelet[2839]: I0903 23:59:57.574174 2839 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:59:57.581610 kubelet[2839]: I0903 23:59:57.581042 2839 factory.go:223] Registration of the containerd container factory successfully Sep 3 23:59:57.586219 kubelet[2839]: E0903 23:59:57.586098 2839 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:59:57.641764 kubelet[2839]: I0903 23:59:57.641658 2839 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 3 23:59:57.646826 kubelet[2839]: I0903 23:59:57.645830 2839 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 3 23:59:57.646826 kubelet[2839]: I0903 23:59:57.645860 2839 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 3 23:59:57.646826 kubelet[2839]: I0903 23:59:57.645891 2839 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 3 23:59:57.646826 kubelet[2839]: I0903 23:59:57.645902 2839 kubelet.go:2436] "Starting kubelet main sync loop" Sep 3 23:59:57.646826 kubelet[2839]: E0903 23:59:57.645963 2839 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:59:57.723578 kubelet[2839]: I0903 23:59:57.723540 2839 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:59:57.723578 kubelet[2839]: I0903 23:59:57.723569 2839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:59:57.723805 kubelet[2839]: I0903 23:59:57.723596 2839 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:59:57.723805 kubelet[2839]: I0903 23:59:57.723763 2839 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:59:57.723805 kubelet[2839]: I0903 23:59:57.723779 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:59:57.723805 kubelet[2839]: I0903 23:59:57.723801 2839 policy_none.go:49] "None policy: Start" Sep 3 23:59:57.723997 kubelet[2839]: I0903 23:59:57.723816 2839 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:59:57.723997 kubelet[2839]: I0903 23:59:57.723832 2839 state_mem.go:35] "Initializing new in-memory state store" Sep 3 
23:59:57.723997 kubelet[2839]: I0903 23:59:57.723986 2839 state_mem.go:75] "Updated machine memory state" Sep 3 23:59:57.732440 kubelet[2839]: E0903 23:59:57.732197 2839 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 3 23:59:57.732440 kubelet[2839]: I0903 23:59:57.732395 2839 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:59:57.732440 kubelet[2839]: I0903 23:59:57.732411 2839 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:59:57.735475 kubelet[2839]: I0903 23:59:57.735285 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:59:57.739551 kubelet[2839]: E0903 23:59:57.739496 2839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 3 23:59:57.750859 kubelet[2839]: I0903 23:59:57.750495 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.752317 kubelet[2839]: I0903 23:59:57.752216 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.753131 kubelet[2839]: I0903 23:59:57.753008 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.771424 kubelet[2839]: I0903 23:59:57.771292 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-k8s-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " 
pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.771424 kubelet[2839]: I0903 23:59:57.771350 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-ca-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.771424 kubelet[2839]: I0903 23:59:57.771379 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-k8s-certs\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.772645 kubelet[2839]: I0903 23:59:57.772334 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a90a434ccad47286bf9d13b5f67f34cd-kubeconfig\") pod \"kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"a90a434ccad47286bf9d13b5f67f34cd\") " pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.773582 kubelet[2839]: I0903 23:59:57.773540 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-ca-certs\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " 
pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.773693 kubelet[2839]: I0903 23:59:57.773594 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/727627f22e2e98f017dd1b437e48e656-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"727627f22e2e98f017dd1b437e48e656\") " pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.773693 kubelet[2839]: I0903 23:59:57.773625 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.773693 kubelet[2839]: I0903 23:59:57.773654 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-kubeconfig\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") " pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.773883 kubelet[2839]: I0903 23:59:57.773691 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40898004a51e6da3c926b848b352cfbd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" (UID: \"40898004a51e6da3c926b848b352cfbd\") 
" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.785880 kubelet[2839]: I0903 23:59:57.783921 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 3 23:59:57.794494 kubelet[2839]: I0903 23:59:57.794037 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 3 23:59:57.800279 kubelet[2839]: I0903 23:59:57.800226 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 3 23:59:57.800445 kubelet[2839]: E0903 23:59:57.800312 2839 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" already exists" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.863452 kubelet[2839]: I0903 23:59:57.863261 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.880164 kubelet[2839]: I0903 23:59:57.880023 2839 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:57.881217 kubelet[2839]: I0903 23:59:57.880690 2839 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" Sep 3 23:59:58.421407 sudo[2853]: pam_unix(sudo:session): session closed for user root Sep 3 23:59:58.470983 kubelet[2839]: I0903 23:59:58.470891 2839 apiserver.go:52] "Watching apiserver" Sep 3 23:59:58.568300 kubelet[2839]: I0903 23:59:58.568235 2839 desired_state_of_world_populator.go:158] 
"Finished populating initial desired state of world" Sep 3 23:59:58.752099 kubelet[2839]: I0903 23:59:58.751090 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" podStartSLOduration=1.751065286 podStartE2EDuration="1.751065286s" podCreationTimestamp="2025-09-03 23:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:59:58.736805636 +0000 UTC m=+1.459701985" watchObservedRunningTime="2025-09-03 23:59:58.751065286 +0000 UTC m=+1.473961630" Sep 3 23:59:58.766481 kubelet[2839]: I0903 23:59:58.766290 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" podStartSLOduration=1.766260798 podStartE2EDuration="1.766260798s" podCreationTimestamp="2025-09-03 23:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:59:58.765881031 +0000 UTC m=+1.488777376" watchObservedRunningTime="2025-09-03 23:59:58.766260798 +0000 UTC m=+1.489157134" Sep 3 23:59:58.767293 kubelet[2839]: I0903 23:59:58.767221 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" podStartSLOduration=3.767205282 podStartE2EDuration="3.767205282s" podCreationTimestamp="2025-09-03 23:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:59:58.751983995 +0000 UTC m=+1.474880341" watchObservedRunningTime="2025-09-03 23:59:58.767205282 +0000 UTC m=+1.490101628" Sep 4 00:00:00.467970 sudo[1884]: pam_unix(sudo:session): session closed for user root Sep 4 00:00:00.512223 sshd[1883]: 
Connection closed by 147.75.109.163 port 50280 Sep 4 00:00:00.513393 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Sep 4 00:00:00.522531 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Sep 4 00:00:00.523418 systemd[1]: sshd@8-10.128.0.18:22-147.75.109.163:50280.service: Deactivated successfully. Sep 4 00:00:00.527902 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 00:00:00.528591 systemd[1]: session-9.scope: Consumed 6.589s CPU time, 277.6M memory peak. Sep 4 00:00:00.533152 systemd-logind[1531]: Removed session 9. Sep 4 00:00:00.954166 kubelet[2839]: I0904 00:00:00.954114 2839 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 00:00:00.954946 containerd[1603]: time="2025-09-04T00:00:00.954884800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 00:00:00.955497 kubelet[2839]: I0904 00:00:00.955460 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 00:00:01.901411 kubelet[2839]: I0904 00:00:01.901308 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1edb915b-9fce-49f6-904f-712998247a98-xtables-lock\") pod \"kube-proxy-92jpv\" (UID: \"1edb915b-9fce-49f6-904f-712998247a98\") " pod="kube-system/kube-proxy-92jpv" Sep 4 00:00:01.903385 kubelet[2839]: I0904 00:00:01.903178 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1edb915b-9fce-49f6-904f-712998247a98-lib-modules\") pod \"kube-proxy-92jpv\" (UID: \"1edb915b-9fce-49f6-904f-712998247a98\") " pod="kube-system/kube-proxy-92jpv" Sep 4 00:00:01.903385 kubelet[2839]: I0904 00:00:01.903235 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-l9bwh\" (UniqueName: \"kubernetes.io/projected/1edb915b-9fce-49f6-904f-712998247a98-kube-api-access-l9bwh\") pod \"kube-proxy-92jpv\" (UID: \"1edb915b-9fce-49f6-904f-712998247a98\") " pod="kube-system/kube-proxy-92jpv" Sep 4 00:00:01.903385 kubelet[2839]: I0904 00:00:01.903281 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1edb915b-9fce-49f6-904f-712998247a98-kube-proxy\") pod \"kube-proxy-92jpv\" (UID: \"1edb915b-9fce-49f6-904f-712998247a98\") " pod="kube-system/kube-proxy-92jpv" Sep 4 00:00:01.903545 systemd[1]: Created slice kubepods-besteffort-pod1edb915b_9fce_49f6_904f_712998247a98.slice - libcontainer container kubepods-besteffort-pod1edb915b_9fce_49f6_904f_712998247a98.slice. Sep 4 00:00:01.942984 systemd[1]: Created slice kubepods-burstable-pod3e4318e3_c789_4c2e_885d_5a3aca4657bc.slice - libcontainer container kubepods-burstable-pod3e4318e3_c789_4c2e_885d_5a3aca4657bc.slice. 
Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.004858 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cni-path\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.004926 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-kernel\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.004953 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9whdx\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-kube-api-access-9whdx\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.004981 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-bpf-maps\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.005008 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-cgroup\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.005483 kubelet[2839]: I0904 00:00:02.005035 2839 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-etc-cni-netd\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005074 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e4318e3-c789-4c2e-885d-5a3aca4657bc-clustermesh-secrets\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005105 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-run\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005131 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-net\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005160 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hubble-tls\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005206 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hostproc\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006368 kubelet[2839]: I0904 00:00:02.005239 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-lib-modules\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006679 kubelet[2839]: I0904 00:00:02.005265 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-xtables-lock\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.006679 kubelet[2839]: I0904 00:00:02.005289 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-config-path\") pod \"cilium-dfz2h\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") " pod="kube-system/cilium-dfz2h" Sep 4 00:00:02.176725 systemd[1]: Created slice kubepods-besteffort-pod4d483060_a36a_42d0_821e_31a1ebfed99c.slice - libcontainer container kubepods-besteffort-pod4d483060_a36a_42d0_821e_31a1ebfed99c.slice. 
Sep 4 00:00:02.207159 kubelet[2839]: I0904 00:00:02.207096 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d483060-a36a-42d0-821e-31a1ebfed99c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jrm56\" (UID: \"4d483060-a36a-42d0-821e-31a1ebfed99c\") " pod="kube-system/cilium-operator-6c4d7847fc-jrm56" Sep 4 00:00:02.207159 kubelet[2839]: I0904 00:00:02.207162 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrzwb\" (UniqueName: \"kubernetes.io/projected/4d483060-a36a-42d0-821e-31a1ebfed99c-kube-api-access-wrzwb\") pod \"cilium-operator-6c4d7847fc-jrm56\" (UID: \"4d483060-a36a-42d0-821e-31a1ebfed99c\") " pod="kube-system/cilium-operator-6c4d7847fc-jrm56" Sep 4 00:00:02.219108 containerd[1603]: time="2025-09-04T00:00:02.217799364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92jpv,Uid:1edb915b-9fce-49f6-904f-712998247a98,Namespace:kube-system,Attempt:0,}" Sep 4 00:00:02.251435 containerd[1603]: time="2025-09-04T00:00:02.251386552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfz2h,Uid:3e4318e3-c789-4c2e-885d-5a3aca4657bc,Namespace:kube-system,Attempt:0,}" Sep 4 00:00:02.297732 containerd[1603]: time="2025-09-04T00:00:02.297305341Z" level=info msg="connecting to shim 9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7" address="unix:///run/containerd/s/fd24187c8f92e0d783dea64abff0916d6047f6ceb4be61eab732b3fba3e0a75e" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:00:02.329970 containerd[1603]: time="2025-09-04T00:00:02.329535248Z" level=info msg="connecting to shim 239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:00:02.383389 systemd[1]: Started 
cri-containerd-9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7.scope - libcontainer container 9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7. Sep 4 00:00:02.397396 systemd[1]: Started cri-containerd-239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae.scope - libcontainer container 239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae. Sep 4 00:00:02.464550 containerd[1603]: time="2025-09-04T00:00:02.463566282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92jpv,Uid:1edb915b-9fce-49f6-904f-712998247a98,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7\"" Sep 4 00:00:02.471377 containerd[1603]: time="2025-09-04T00:00:02.471311627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfz2h,Uid:3e4318e3-c789-4c2e-885d-5a3aca4657bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\"" Sep 4 00:00:02.480008 containerd[1603]: time="2025-09-04T00:00:02.479871462Z" level=info msg="CreateContainer within sandbox \"9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 00:00:02.480337 containerd[1603]: time="2025-09-04T00:00:02.480293223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 00:00:02.494146 containerd[1603]: time="2025-09-04T00:00:02.493891803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jrm56,Uid:4d483060-a36a-42d0-821e-31a1ebfed99c,Namespace:kube-system,Attempt:0,}" Sep 4 00:00:02.511606 containerd[1603]: time="2025-09-04T00:00:02.511541466Z" level=info msg="Container 1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d: CDI devices from CRI Config.CDIDevices: []" Sep 4 00:00:02.537020 containerd[1603]: 
time="2025-09-04T00:00:02.536522732Z" level=info msg="CreateContainer within sandbox \"9c12c2b21054c293553b55e1e1ce9745e5a9b0b558858a91b14c92529290efd7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d\"" Sep 4 00:00:02.538209 containerd[1603]: time="2025-09-04T00:00:02.538035068Z" level=info msg="StartContainer for \"1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d\"" Sep 4 00:00:02.542747 containerd[1603]: time="2025-09-04T00:00:02.542700283Z" level=info msg="connecting to shim 1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d" address="unix:///run/containerd/s/fd24187c8f92e0d783dea64abff0916d6047f6ceb4be61eab732b3fba3e0a75e" protocol=ttrpc version=3 Sep 4 00:00:02.549311 containerd[1603]: time="2025-09-04T00:00:02.549042831Z" level=info msg="connecting to shim 16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c" address="unix:///run/containerd/s/bd670d2beb1b5e24c6a2845a30c5f54145f68a2437547784e511863216f1edc5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 00:00:02.586327 systemd[1]: Started cri-containerd-1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d.scope - libcontainer container 1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d. Sep 4 00:00:02.614733 systemd[1]: Started cri-containerd-16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c.scope - libcontainer container 16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c. 
Sep 4 00:00:02.705325 containerd[1603]: time="2025-09-04T00:00:02.705247484Z" level=info msg="StartContainer for \"1294e7f53d9d0afe289e15a7f4bc4c7e3c5766320d0e21d55bb78c541b7c5e1d\" returns successfully" Sep 4 00:00:02.770102 containerd[1603]: time="2025-09-04T00:00:02.769935247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jrm56,Uid:4d483060-a36a-42d0-821e-31a1ebfed99c,Namespace:kube-system,Attempt:0,} returns sandbox id \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\"" Sep 4 00:00:03.478485 kubelet[2839]: I0904 00:00:03.478281 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-92jpv" podStartSLOduration=2.478251644 podStartE2EDuration="2.478251644s" podCreationTimestamp="2025-09-04 00:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:02.756934176 +0000 UTC m=+5.479830521" watchObservedRunningTime="2025-09-04 00:00:03.478251644 +0000 UTC m=+6.201147988" Sep 4 00:00:10.429697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099272007.mount: Deactivated successfully. 
Sep 4 00:00:13.826303 containerd[1603]: time="2025-09-04T00:00:13.826136296Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:00:13.827703 containerd[1603]: time="2025-09-04T00:00:13.827192150Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 00:00:13.831090 containerd[1603]: time="2025-09-04T00:00:13.830474271Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:00:13.834713 containerd[1603]: time="2025-09-04T00:00:13.834667572Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.354321338s"
Sep 4 00:00:13.834976 containerd[1603]: time="2025-09-04T00:00:13.834946036Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 00:00:13.839299 containerd[1603]: time="2025-09-04T00:00:13.839240505Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 00:00:13.846618 containerd[1603]: time="2025-09-04T00:00:13.846485205Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 00:00:13.869184 containerd[1603]: time="2025-09-04T00:00:13.866818011Z" level=info msg="Container 1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:13.878928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21356289.mount: Deactivated successfully.
Sep 4 00:00:13.881890 containerd[1603]: time="2025-09-04T00:00:13.881804142Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\""
Sep 4 00:00:13.882740 containerd[1603]: time="2025-09-04T00:00:13.882712050Z" level=info msg="StartContainer for \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\""
Sep 4 00:00:13.884516 containerd[1603]: time="2025-09-04T00:00:13.884432579Z" level=info msg="connecting to shim 1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" protocol=ttrpc version=3
Sep 4 00:00:13.926416 systemd[1]: Started cri-containerd-1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4.scope - libcontainer container 1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4.
Sep 4 00:00:13.989547 containerd[1603]: time="2025-09-04T00:00:13.989475446Z" level=info msg="StartContainer for \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" returns successfully"
Sep 4 00:00:14.018033 systemd[1]: cri-containerd-1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4.scope: Deactivated successfully.
Sep 4 00:00:14.021624 containerd[1603]: time="2025-09-04T00:00:14.021564026Z" level=info msg="received exit event container_id:\"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" id:\"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" pid:3261 exited_at:{seconds:1756944014 nanos:21048086}"
Sep 4 00:00:14.022272 containerd[1603]: time="2025-09-04T00:00:14.021627705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" id:\"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" pid:3261 exited_at:{seconds:1756944014 nanos:21048086}"
Sep 4 00:00:14.864524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4-rootfs.mount: Deactivated successfully.
Sep 4 00:00:16.827621 containerd[1603]: time="2025-09-04T00:00:16.827537367Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 00:00:16.872068 containerd[1603]: time="2025-09-04T00:00:16.871648494Z" level=info msg="Container 8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:16.890460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577559401.mount: Deactivated successfully.
Sep 4 00:00:16.909202 containerd[1603]: time="2025-09-04T00:00:16.909134772Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\""
Sep 4 00:00:16.910113 containerd[1603]: time="2025-09-04T00:00:16.909980947Z" level=info msg="StartContainer for \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\""
Sep 4 00:00:16.911557 containerd[1603]: time="2025-09-04T00:00:16.911521327Z" level=info msg="connecting to shim 8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" protocol=ttrpc version=3
Sep 4 00:00:16.954366 systemd[1]: Started cri-containerd-8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35.scope - libcontainer container 8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35.
Sep 4 00:00:17.019507 containerd[1603]: time="2025-09-04T00:00:17.019440711Z" level=info msg="StartContainer for \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" returns successfully"
Sep 4 00:00:17.055829 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 00:00:17.056778 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:00:17.058241 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:00:17.062537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 00:00:17.068899 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 00:00:17.069373 containerd[1603]: time="2025-09-04T00:00:17.069331923Z" level=info msg="received exit event container_id:\"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" id:\"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" pid:3312 exited_at:{seconds:1756944017 nanos:68158881}"
Sep 4 00:00:17.070492 systemd[1]: cri-containerd-8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35.scope: Deactivated successfully.
Sep 4 00:00:17.071278 containerd[1603]: time="2025-09-04T00:00:17.071242996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" id:\"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" pid:3312 exited_at:{seconds:1756944017 nanos:68158881}"
Sep 4 00:00:17.113021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 00:00:17.848885 containerd[1603]: time="2025-09-04T00:00:17.848394856Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 00:00:17.857650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35-rootfs.mount: Deactivated successfully.
Sep 4 00:00:17.924575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425882909.mount: Deactivated successfully.
Sep 4 00:00:17.926582 containerd[1603]: time="2025-09-04T00:00:17.925129341Z" level=info msg="Container 3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:17.949356 containerd[1603]: time="2025-09-04T00:00:17.949229832Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\""
Sep 4 00:00:17.950847 containerd[1603]: time="2025-09-04T00:00:17.950792910Z" level=info msg="StartContainer for \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\""
Sep 4 00:00:17.956080 containerd[1603]: time="2025-09-04T00:00:17.955878444Z" level=info msg="connecting to shim 3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" protocol=ttrpc version=3
Sep 4 00:00:18.017319 systemd[1]: Started cri-containerd-3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4.scope - libcontainer container 3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4.
Sep 4 00:00:18.136590 systemd[1]: cri-containerd-3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4.scope: Deactivated successfully.
Sep 4 00:00:18.141668 containerd[1603]: time="2025-09-04T00:00:18.141470639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" id:\"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" pid:3371 exited_at:{seconds:1756944018 nanos:137894942}"
Sep 4 00:00:18.145215 containerd[1603]: time="2025-09-04T00:00:18.145162083Z" level=info msg="received exit event container_id:\"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" id:\"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" pid:3371 exited_at:{seconds:1756944018 nanos:137894942}"
Sep 4 00:00:18.181534 containerd[1603]: time="2025-09-04T00:00:18.181471484Z" level=info msg="StartContainer for \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" returns successfully"
Sep 4 00:00:18.816322 containerd[1603]: time="2025-09-04T00:00:18.816237737Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:00:18.817709 containerd[1603]: time="2025-09-04T00:00:18.817506746Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 00:00:18.819332 containerd[1603]: time="2025-09-04T00:00:18.819269021Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 00:00:18.821822 containerd[1603]: time="2025-09-04T00:00:18.821762653Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.98245028s"
Sep 4 00:00:18.822044 containerd[1603]: time="2025-09-04T00:00:18.822013602Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 00:00:18.830782 containerd[1603]: time="2025-09-04T00:00:18.830707103Z" level=info msg="CreateContainer within sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 00:00:18.849101 containerd[1603]: time="2025-09-04T00:00:18.848919110Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 00:00:18.854459 containerd[1603]: time="2025-09-04T00:00:18.850358070Z" level=info msg="Container 6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:18.863212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4-rootfs.mount: Deactivated successfully.
Sep 4 00:00:18.881726 containerd[1603]: time="2025-09-04T00:00:18.881667418Z" level=info msg="CreateContainer within sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\""
Sep 4 00:00:18.892363 containerd[1603]: time="2025-09-04T00:00:18.892280823Z" level=info msg="StartContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\""
Sep 4 00:00:18.896284 containerd[1603]: time="2025-09-04T00:00:18.896233854Z" level=info msg="connecting to shim 6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64" address="unix:///run/containerd/s/bd670d2beb1b5e24c6a2845a30c5f54145f68a2437547784e511863216f1edc5" protocol=ttrpc version=3
Sep 4 00:00:18.904401 containerd[1603]: time="2025-09-04T00:00:18.904333814Z" level=info msg="Container ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:18.934842 containerd[1603]: time="2025-09-04T00:00:18.934679445Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\""
Sep 4 00:00:18.939431 containerd[1603]: time="2025-09-04T00:00:18.937305229Z" level=info msg="StartContainer for \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\""
Sep 4 00:00:18.942742 containerd[1603]: time="2025-09-04T00:00:18.942577329Z" level=info msg="connecting to shim ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" protocol=ttrpc version=3
Sep 4 00:00:18.948478 systemd[1]: Started cri-containerd-6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64.scope - libcontainer container 6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64.
Sep 4 00:00:18.994412 systemd[1]: Started cri-containerd-ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9.scope - libcontainer container ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9.
Sep 4 00:00:19.043373 containerd[1603]: time="2025-09-04T00:00:19.043319636Z" level=info msg="StartContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" returns successfully"
Sep 4 00:00:19.074370 systemd[1]: cri-containerd-ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9.scope: Deactivated successfully.
Sep 4 00:00:19.078571 containerd[1603]: time="2025-09-04T00:00:19.078486130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" id:\"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" pid:3435 exited_at:{seconds:1756944019 nanos:76799739}"
Sep 4 00:00:19.079366 containerd[1603]: time="2025-09-04T00:00:19.079289808Z" level=info msg="received exit event container_id:\"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" id:\"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" pid:3435 exited_at:{seconds:1756944019 nanos:76799739}"
Sep 4 00:00:19.094797 containerd[1603]: time="2025-09-04T00:00:19.094722928Z" level=info msg="StartContainer for \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" returns successfully"
Sep 4 00:00:19.128648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9-rootfs.mount: Deactivated successfully.
Sep 4 00:00:19.860425 containerd[1603]: time="2025-09-04T00:00:19.860308141Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 00:00:19.883766 containerd[1603]: time="2025-09-04T00:00:19.882656734Z" level=info msg="Container 5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:19.911291 containerd[1603]: time="2025-09-04T00:00:19.911243574Z" level=info msg="CreateContainer within sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\""
Sep 4 00:00:19.913956 containerd[1603]: time="2025-09-04T00:00:19.913766894Z" level=info msg="StartContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\""
Sep 4 00:00:19.918207 containerd[1603]: time="2025-09-04T00:00:19.918134815Z" level=info msg="connecting to shim 5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944" address="unix:///run/containerd/s/a1b4151a781cf1fccc74fcf6efb52d8b0d09b2048050a78441ab6d7df007e7f2" protocol=ttrpc version=3
Sep 4 00:00:19.987494 systemd[1]: Started cri-containerd-5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944.scope - libcontainer container 5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944.
Sep 4 00:00:20.140711 containerd[1603]: time="2025-09-04T00:00:20.140544981Z" level=info msg="StartContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" returns successfully"
Sep 4 00:00:20.381166 containerd[1603]: time="2025-09-04T00:00:20.381040872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" id:\"486b6d00d179f5c7da708919505c6d84718351b864ef94f287a1e6d71e0b6635\" pid:3513 exited_at:{seconds:1756944020 nanos:380325461}"
Sep 4 00:00:20.448584 kubelet[2839]: I0904 00:00:20.448419 2839 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 00:00:20.528707 kubelet[2839]: I0904 00:00:20.528619 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jrm56" podStartSLOduration=2.479362421 podStartE2EDuration="18.5285896s" podCreationTimestamp="2025-09-04 00:00:02 +0000 UTC" firstStartedPulling="2025-09-04 00:00:02.774160513 +0000 UTC m=+5.497056849" lastFinishedPulling="2025-09-04 00:00:18.823387694 +0000 UTC m=+21.546284028" observedRunningTime="2025-09-04 00:00:20.03933405 +0000 UTC m=+22.762230395" watchObservedRunningTime="2025-09-04 00:00:20.5285896 +0000 UTC m=+23.251485946"
Sep 4 00:00:20.555105 kubelet[2839]: I0904 00:00:20.554423 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmds\" (UniqueName: \"kubernetes.io/projected/ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71-kube-api-access-stmds\") pod \"coredns-674b8bbfcf-drhpc\" (UID: \"ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71\") " pod="kube-system/coredns-674b8bbfcf-drhpc"
Sep 4 00:00:20.555105 kubelet[2839]: I0904 00:00:20.554698 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71-config-volume\") pod \"coredns-674b8bbfcf-drhpc\" (UID: \"ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71\") " pod="kube-system/coredns-674b8bbfcf-drhpc"
Sep 4 00:00:20.569643 systemd[1]: Created slice kubepods-burstable-poded6a6eee_f0ab_47d8_ac5c_80e8cc699a71.slice - libcontainer container kubepods-burstable-poded6a6eee_f0ab_47d8_ac5c_80e8cc699a71.slice.
Sep 4 00:00:20.585218 systemd[1]: Created slice kubepods-burstable-podc57871a7_5cac_497c_8dcd_be9da812e949.slice - libcontainer container kubepods-burstable-podc57871a7_5cac_497c_8dcd_be9da812e949.slice.
Sep 4 00:00:20.656152 kubelet[2839]: I0904 00:00:20.655047 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndbrg\" (UniqueName: \"kubernetes.io/projected/c57871a7-5cac-497c-8dcd-be9da812e949-kube-api-access-ndbrg\") pod \"coredns-674b8bbfcf-rkq6f\" (UID: \"c57871a7-5cac-497c-8dcd-be9da812e949\") " pod="kube-system/coredns-674b8bbfcf-rkq6f"
Sep 4 00:00:20.656424 kubelet[2839]: I0904 00:00:20.656200 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c57871a7-5cac-497c-8dcd-be9da812e949-config-volume\") pod \"coredns-674b8bbfcf-rkq6f\" (UID: \"c57871a7-5cac-497c-8dcd-be9da812e949\") " pod="kube-system/coredns-674b8bbfcf-rkq6f"
Sep 4 00:00:20.880239 containerd[1603]: time="2025-09-04T00:00:20.879971358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-drhpc,Uid:ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71,Namespace:kube-system,Attempt:0,}"
Sep 4 00:00:20.891287 containerd[1603]: time="2025-09-04T00:00:20.891244777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rkq6f,Uid:c57871a7-5cac-497c-8dcd-be9da812e949,Namespace:kube-system,Attempt:0,}"
Sep 4 00:00:20.953200 kubelet[2839]: I0904 00:00:20.950933 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dfz2h" podStartSLOduration=8.591225495 podStartE2EDuration="19.950912095s" podCreationTimestamp="2025-09-04 00:00:01 +0000 UTC" firstStartedPulling="2025-09-04 00:00:02.477986543 +0000 UTC m=+5.200882875" lastFinishedPulling="2025-09-04 00:00:13.837673146 +0000 UTC m=+16.560569475" observedRunningTime="2025-09-04 00:00:20.950297448 +0000 UTC m=+23.673193794" watchObservedRunningTime="2025-09-04 00:00:20.950912095 +0000 UTC m=+23.673808437"
Sep 4 00:00:23.073705 systemd-networkd[1457]: cilium_host: Link UP
Sep 4 00:00:23.074717 systemd-networkd[1457]: cilium_net: Link UP
Sep 4 00:00:23.075836 systemd-networkd[1457]: cilium_net: Gained carrier
Sep 4 00:00:23.076608 systemd-networkd[1457]: cilium_host: Gained carrier
Sep 4 00:00:23.266641 systemd-networkd[1457]: cilium_vxlan: Link UP
Sep 4 00:00:23.266655 systemd-networkd[1457]: cilium_vxlan: Gained carrier
Sep 4 00:00:23.387270 systemd-networkd[1457]: cilium_host: Gained IPv6LL
Sep 4 00:00:23.573188 kernel: NET: Registered PF_ALG protocol family
Sep 4 00:00:23.829150 systemd-networkd[1457]: cilium_net: Gained IPv6LL
Sep 4 00:00:24.532820 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL
Sep 4 00:00:24.572501 systemd-networkd[1457]: lxc_health: Link UP
Sep 4 00:00:24.585332 systemd-networkd[1457]: lxc_health: Gained carrier
Sep 4 00:00:24.968751 systemd-networkd[1457]: lxc1514e67199f8: Link UP
Sep 4 00:00:24.983553 kernel: eth0: renamed from tmpcd6d7
Sep 4 00:00:24.993204 systemd-networkd[1457]: lxc1514e67199f8: Gained carrier
Sep 4 00:00:25.008826 kernel: eth0: renamed from tmp2847e
Sep 4 00:00:25.015324 systemd-networkd[1457]: lxc366f6368af4f: Link UP
Sep 4 00:00:25.017574 systemd-networkd[1457]: lxc366f6368af4f: Gained carrier
Sep 4 00:00:25.940270 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Sep 4 00:00:26.771588 systemd-networkd[1457]: lxc366f6368af4f: Gained IPv6LL
Sep 4 00:00:26.835474 systemd-networkd[1457]: lxc1514e67199f8: Gained IPv6LL
Sep 4 00:00:29.452004 ntpd[1518]: Listen normally on 8 cilium_host 192.168.0.1:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 8 cilium_host 192.168.0.1:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 9 cilium_net [fe80::bcb6:71ff:fe3d:92eb%4]:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 10 cilium_host [fe80::3a:55ff:fec3:6f7e%5]:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 11 cilium_vxlan [fe80::e44f:d7ff:fe03:f004%6]:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 12 lxc_health [fe80::2cc4:eff:fe94:18f3%8]:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 13 lxc1514e67199f8 [fe80::f8be:5eff:fe8e:bbd0%10]:123
Sep 4 00:00:29.453199 ntpd[1518]: 4 Sep 00:00:29 ntpd[1518]: Listen normally on 14 lxc366f6368af4f [fe80::6ccc:b0ff:fe8b:5c61%12]:123
Sep 4 00:00:29.452762 ntpd[1518]: Listen normally on 9 cilium_net [fe80::bcb6:71ff:fe3d:92eb%4]:123
Sep 4 00:00:29.452850 ntpd[1518]: Listen normally on 10 cilium_host [fe80::3a:55ff:fec3:6f7e%5]:123
Sep 4 00:00:29.452912 ntpd[1518]: Listen normally on 11 cilium_vxlan [fe80::e44f:d7ff:fe03:f004%6]:123
Sep 4 00:00:29.452978 ntpd[1518]: Listen normally on 12 lxc_health [fe80::2cc4:eff:fe94:18f3%8]:123
Sep 4 00:00:29.453033 ntpd[1518]: Listen normally on 13 lxc1514e67199f8 [fe80::f8be:5eff:fe8e:bbd0%10]:123
Sep 4 00:00:29.453117 ntpd[1518]: Listen normally on 14 lxc366f6368af4f [fe80::6ccc:b0ff:fe8b:5c61%12]:123
Sep 4 00:00:30.408178 containerd[1603]: time="2025-09-04T00:00:30.406511299Z" level=info msg="connecting to shim 2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1" address="unix:///run/containerd/s/dcda206e9cbd1f8a176f0e96a0631999181ef5e9d2105bb922ab544b02ab4c38" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:00:30.417090 containerd[1603]: time="2025-09-04T00:00:30.416210052Z" level=info msg="connecting to shim cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592" address="unix:///run/containerd/s/67f704d7354e0e01d511426585cfc44b25d2409bf279aa6c856c47aca460b151" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:00:30.497899 systemd[1]: Started cri-containerd-cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592.scope - libcontainer container cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592.
Sep 4 00:00:30.512776 systemd[1]: Started cri-containerd-2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1.scope - libcontainer container 2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1.
Sep 4 00:00:30.650563 containerd[1603]: time="2025-09-04T00:00:30.650469084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-drhpc,Uid:ed6a6eee-f0ab-47d8-ac5c-80e8cc699a71,Namespace:kube-system,Attempt:0,} returns sandbox id \"2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1\""
Sep 4 00:00:30.664040 containerd[1603]: time="2025-09-04T00:00:30.663234939Z" level=info msg="CreateContainer within sandbox \"2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 00:00:30.691160 containerd[1603]: time="2025-09-04T00:00:30.687928461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rkq6f,Uid:c57871a7-5cac-497c-8dcd-be9da812e949,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592\""
Sep 4 00:00:30.695160 containerd[1603]: time="2025-09-04T00:00:30.693321601Z" level=info msg="Container e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:30.699111 containerd[1603]: time="2025-09-04T00:00:30.699014648Z" level=info msg="CreateContainer within sandbox \"cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 00:00:30.716517 containerd[1603]: time="2025-09-04T00:00:30.716446945Z" level=info msg="Container 54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:00:30.718464 containerd[1603]: time="2025-09-04T00:00:30.718409979Z" level=info msg="CreateContainer within sandbox \"2847e23d95180ca342fcf4dc802bc579fd283491101fa94a4d781343d6677db1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0\""
Sep 4 00:00:30.719840 containerd[1603]: time="2025-09-04T00:00:30.719769561Z" level=info msg="StartContainer for \"e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0\""
Sep 4 00:00:30.723964 containerd[1603]: time="2025-09-04T00:00:30.723918621Z" level=info msg="connecting to shim e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0" address="unix:///run/containerd/s/dcda206e9cbd1f8a176f0e96a0631999181ef5e9d2105bb922ab544b02ab4c38" protocol=ttrpc version=3
Sep 4 00:00:30.731736 containerd[1603]: time="2025-09-04T00:00:30.731629056Z" level=info msg="CreateContainer within sandbox \"cd6d7d6fca096cc5433da679ed7bdd0b6e19358b3f18516617bbf64d690e2592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781\""
Sep 4 00:00:30.734446 containerd[1603]: time="2025-09-04T00:00:30.734394626Z" level=info msg="StartContainer for \"54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781\""
Sep 4 00:00:30.740515 containerd[1603]: time="2025-09-04T00:00:30.740304739Z" level=info msg="connecting to shim 54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781" address="unix:///run/containerd/s/67f704d7354e0e01d511426585cfc44b25d2409bf279aa6c856c47aca460b151" protocol=ttrpc version=3
Sep 4 00:00:30.765555 systemd[1]: Started cri-containerd-e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0.scope - libcontainer container e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0.
Sep 4 00:00:30.782989 systemd[1]: Started cri-containerd-54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781.scope - libcontainer container 54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781.
Sep 4 00:00:30.863198 containerd[1603]: time="2025-09-04T00:00:30.863029361Z" level=info msg="StartContainer for \"e8465e7c4f52fe7326164e5ac7ceb4ca5ca5b01a01ad7b4db8d9759ffbf547d0\" returns successfully"
Sep 4 00:00:30.884350 containerd[1603]: time="2025-09-04T00:00:30.884271063Z" level=info msg="StartContainer for \"54f6cf2b78bbfa717575ff3028f14ca803ce35d351462ebd40b9c1d8f2bd2781\" returns successfully"
Sep 4 00:00:30.951197 kubelet[2839]: I0904 00:00:30.950896 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rkq6f" podStartSLOduration=28.950875455 podStartE2EDuration="28.950875455s" podCreationTimestamp="2025-09-04 00:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:30.950697787 +0000 UTC m=+33.673594132" watchObservedRunningTime="2025-09-04 00:00:30.950875455 +0000 UTC m=+33.673771823"
Sep 4 00:00:31.387213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975038022.mount: Deactivated successfully.
Sep 4 00:00:31.955032 kubelet[2839]: I0904 00:00:31.954894 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-drhpc" podStartSLOduration=29.954866748 podStartE2EDuration="29.954866748s" podCreationTimestamp="2025-09-04 00:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:30.977497241 +0000 UTC m=+33.700393586" watchObservedRunningTime="2025-09-04 00:00:31.954866748 +0000 UTC m=+34.677763095"
Sep 4 00:00:53.991879 systemd[1]: Started sshd@9-10.128.0.18:22-147.75.109.163:52286.service - OpenSSH per-connection server daemon (147.75.109.163:52286).
Sep 4 00:00:54.328588 sshd[4158]: Accepted publickey for core from 147.75.109.163 port 52286 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:00:54.332376 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:00:54.346637 systemd-logind[1531]: New session 10 of user core.
Sep 4 00:00:54.351608 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 00:00:54.652522 sshd[4160]: Connection closed by 147.75.109.163 port 52286
Sep 4 00:00:54.653617 sshd-session[4158]: pam_unix(sshd:session): session closed for user core
Sep 4 00:00:54.659642 systemd[1]: sshd@9-10.128.0.18:22-147.75.109.163:52286.service: Deactivated successfully.
Sep 4 00:00:54.663376 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 00:00:54.665160 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit.
Sep 4 00:00:54.667432 systemd-logind[1531]: Removed session 10.
Sep 4 00:00:59.714005 systemd[1]: Started sshd@10-10.128.0.18:22-147.75.109.163:52296.service - OpenSSH per-connection server daemon (147.75.109.163:52296).
Sep 4 00:01:00.023590 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 52296 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:00.025439 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:00.034344 systemd-logind[1531]: New session 11 of user core.
Sep 4 00:01:00.041784 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 00:01:00.348553 sshd[4178]: Connection closed by 147.75.109.163 port 52296
Sep 4 00:01:00.350741 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:00.359517 systemd[1]: sshd@10-10.128.0.18:22-147.75.109.163:52296.service: Deactivated successfully.
Sep 4 00:01:00.366473 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 00:01:00.369683 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit.
Sep 4 00:01:00.372196 systemd-logind[1531]: Removed session 11.
Sep 4 00:01:05.413407 systemd[1]: Started sshd@11-10.128.0.18:22-147.75.109.163:35428.service - OpenSSH per-connection server daemon (147.75.109.163:35428).
Sep 4 00:01:05.737339 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 35428 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:05.740245 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:05.748513 systemd-logind[1531]: New session 12 of user core.
Sep 4 00:01:05.757332 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 00:01:06.062018 sshd[4197]: Connection closed by 147.75.109.163 port 35428
Sep 4 00:01:06.063355 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:06.068968 systemd[1]: sshd@11-10.128.0.18:22-147.75.109.163:35428.service: Deactivated successfully.
Sep 4 00:01:06.074864 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 00:01:06.080910 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit.
Sep 4 00:01:06.083328 systemd-logind[1531]: Removed session 12.
Sep 4 00:01:11.121848 systemd[1]: Started sshd@12-10.128.0.18:22-147.75.109.163:38924.service - OpenSSH per-connection server daemon (147.75.109.163:38924).
Sep 4 00:01:11.448232 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 38924 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:11.450001 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:11.459146 systemd-logind[1531]: New session 13 of user core.
Sep 4 00:01:11.468484 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 00:01:11.789578 sshd[4212]: Connection closed by 147.75.109.163 port 38924
Sep 4 00:01:11.792111 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:11.801239 systemd[1]: sshd@12-10.128.0.18:22-147.75.109.163:38924.service: Deactivated successfully.
Sep 4 00:01:11.812901 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 00:01:11.814747 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit.
Sep 4 00:01:11.820944 systemd-logind[1531]: Removed session 13.
Sep 4 00:01:11.853372 systemd[1]: Started sshd@13-10.128.0.18:22-147.75.109.163:38938.service - OpenSSH per-connection server daemon (147.75.109.163:38938).
Sep 4 00:01:12.183273 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 38938 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:12.185092 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:12.193569 systemd-logind[1531]: New session 14 of user core.
Sep 4 00:01:12.201312 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 00:01:12.602358 sshd[4227]: Connection closed by 147.75.109.163 port 38938
Sep 4 00:01:12.605990 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:12.624493 systemd[1]: sshd@13-10.128.0.18:22-147.75.109.163:38938.service: Deactivated successfully.
Sep 4 00:01:12.636910 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 00:01:12.640348 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit.
Sep 4 00:01:12.666548 systemd-logind[1531]: Removed session 14.
Sep 4 00:01:12.668998 systemd[1]: Started sshd@14-10.128.0.18:22-147.75.109.163:38946.service - OpenSSH per-connection server daemon (147.75.109.163:38946).
Sep 4 00:01:12.993189 sshd[4237]: Accepted publickey for core from 147.75.109.163 port 38946 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:12.993980 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:13.006130 systemd-logind[1531]: New session 15 of user core.
Sep 4 00:01:13.023597 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 00:01:13.311778 sshd[4239]: Connection closed by 147.75.109.163 port 38946
Sep 4 00:01:13.313292 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:13.324373 systemd[1]: sshd@14-10.128.0.18:22-147.75.109.163:38946.service: Deactivated successfully.
Sep 4 00:01:13.330068 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 00:01:13.333750 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit.
Sep 4 00:01:13.336121 systemd-logind[1531]: Removed session 15.
Sep 4 00:01:18.367895 systemd[1]: Started sshd@15-10.128.0.18:22-147.75.109.163:38958.service - OpenSSH per-connection server daemon (147.75.109.163:38958).
Sep 4 00:01:18.695722 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 38958 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:18.698366 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:18.707124 systemd-logind[1531]: New session 16 of user core.
Sep 4 00:01:18.712430 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 00:01:19.007384 sshd[4252]: Connection closed by 147.75.109.163 port 38958
Sep 4 00:01:19.008568 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:19.015457 systemd[1]: sshd@15-10.128.0.18:22-147.75.109.163:38958.service: Deactivated successfully.
Sep 4 00:01:19.020452 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 00:01:19.023771 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit.
Sep 4 00:01:19.026729 systemd-logind[1531]: Removed session 16.
Sep 4 00:01:24.063883 systemd[1]: Started sshd@16-10.128.0.18:22-147.75.109.163:42738.service - OpenSSH per-connection server daemon (147.75.109.163:42738).
Sep 4 00:01:24.375241 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 42738 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:24.377516 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:24.386121 systemd-logind[1531]: New session 17 of user core.
Sep 4 00:01:24.391326 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 00:01:24.692035 sshd[4268]: Connection closed by 147.75.109.163 port 42738
Sep 4 00:01:24.693679 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:24.700671 systemd[1]: sshd@16-10.128.0.18:22-147.75.109.163:42738.service: Deactivated successfully.
Sep 4 00:01:24.704116 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 00:01:24.706211 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit.
Sep 4 00:01:24.709298 systemd-logind[1531]: Removed session 17.
Sep 4 00:01:29.748740 systemd[1]: Started sshd@17-10.128.0.18:22-147.75.109.163:42742.service - OpenSSH per-connection server daemon (147.75.109.163:42742).
Sep 4 00:01:30.066777 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 42742 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:30.068738 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:30.076397 systemd-logind[1531]: New session 18 of user core.
Sep 4 00:01:30.085398 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 00:01:30.391997 sshd[4282]: Connection closed by 147.75.109.163 port 42742
Sep 4 00:01:30.392974 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:30.399362 systemd[1]: sshd@17-10.128.0.18:22-147.75.109.163:42742.service: Deactivated successfully.
Sep 4 00:01:30.402766 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 00:01:30.405223 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit.
Sep 4 00:01:30.407783 systemd-logind[1531]: Removed session 18.
Sep 4 00:01:30.448409 systemd[1]: Started sshd@18-10.128.0.18:22-147.75.109.163:43226.service - OpenSSH per-connection server daemon (147.75.109.163:43226).
Sep 4 00:01:30.770156 sshd[4294]: Accepted publickey for core from 147.75.109.163 port 43226 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:30.772925 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:30.783487 systemd-logind[1531]: New session 19 of user core.
Sep 4 00:01:30.795347 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 00:01:31.143326 sshd[4296]: Connection closed by 147.75.109.163 port 43226
Sep 4 00:01:31.144304 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:31.151980 systemd[1]: sshd@18-10.128.0.18:22-147.75.109.163:43226.service: Deactivated successfully.
Sep 4 00:01:31.156361 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 00:01:31.157970 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit.
Sep 4 00:01:31.161680 systemd-logind[1531]: Removed session 19.
Sep 4 00:01:31.207709 systemd[1]: Started sshd@19-10.128.0.18:22-147.75.109.163:43242.service - OpenSSH per-connection server daemon (147.75.109.163:43242).
Sep 4 00:01:31.524346 sshd[4305]: Accepted publickey for core from 147.75.109.163 port 43242 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:31.527227 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:31.537193 systemd-logind[1531]: New session 20 of user core.
Sep 4 00:01:31.546586 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 00:01:32.502532 sshd[4307]: Connection closed by 147.75.109.163 port 43242
Sep 4 00:01:32.502381 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:32.513838 systemd[1]: sshd@19-10.128.0.18:22-147.75.109.163:43242.service: Deactivated successfully.
Sep 4 00:01:32.519800 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 00:01:32.523352 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit.
Sep 4 00:01:32.527108 systemd-logind[1531]: Removed session 20.
Sep 4 00:01:32.558926 systemd[1]: Started sshd@20-10.128.0.18:22-147.75.109.163:43244.service - OpenSSH per-connection server daemon (147.75.109.163:43244).
Sep 4 00:01:32.886653 sshd[4324]: Accepted publickey for core from 147.75.109.163 port 43244 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:32.888759 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:32.895138 systemd-logind[1531]: New session 21 of user core.
Sep 4 00:01:32.904373 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 00:01:33.353015 sshd[4326]: Connection closed by 147.75.109.163 port 43244
Sep 4 00:01:33.354001 sshd-session[4324]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:33.361678 systemd[1]: sshd@20-10.128.0.18:22-147.75.109.163:43244.service: Deactivated successfully.
Sep 4 00:01:33.367306 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 00:01:33.368954 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit.
Sep 4 00:01:33.372251 systemd-logind[1531]: Removed session 21.
Sep 4 00:01:33.412391 systemd[1]: Started sshd@21-10.128.0.18:22-147.75.109.163:43252.service - OpenSSH per-connection server daemon (147.75.109.163:43252).
Sep 4 00:01:33.728759 sshd[4338]: Accepted publickey for core from 147.75.109.163 port 43252 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:33.731494 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:33.741704 systemd-logind[1531]: New session 22 of user core.
Sep 4 00:01:33.747361 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 00:01:34.050339 sshd[4340]: Connection closed by 147.75.109.163 port 43252
Sep 4 00:01:34.051729 sshd-session[4338]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:34.058597 systemd[1]: sshd@21-10.128.0.18:22-147.75.109.163:43252.service: Deactivated successfully.
Sep 4 00:01:34.062621 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 00:01:34.066451 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit.
Sep 4 00:01:34.069923 systemd-logind[1531]: Removed session 22.
Sep 4 00:01:39.108707 systemd[1]: Started sshd@22-10.128.0.18:22-147.75.109.163:43268.service - OpenSSH per-connection server daemon (147.75.109.163:43268).
Sep 4 00:01:39.438909 sshd[4354]: Accepted publickey for core from 147.75.109.163 port 43268 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:39.440881 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:39.450140 systemd-logind[1531]: New session 23 of user core.
Sep 4 00:01:39.458610 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 00:01:39.760933 sshd[4356]: Connection closed by 147.75.109.163 port 43268
Sep 4 00:01:39.762324 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:39.767810 systemd[1]: sshd@22-10.128.0.18:22-147.75.109.163:43268.service: Deactivated successfully.
Sep 4 00:01:39.770992 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 00:01:39.775040 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit.
Sep 4 00:01:39.779175 systemd-logind[1531]: Removed session 23.
Sep 4 00:01:44.827551 systemd[1]: Started sshd@23-10.128.0.18:22-147.75.109.163:55922.service - OpenSSH per-connection server daemon (147.75.109.163:55922).
Sep 4 00:01:45.148551 sshd[4370]: Accepted publickey for core from 147.75.109.163 port 55922 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:45.150903 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:45.161871 systemd-logind[1531]: New session 24 of user core.
Sep 4 00:01:45.168447 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 00:01:45.456843 sshd[4372]: Connection closed by 147.75.109.163 port 55922
Sep 4 00:01:45.460617 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:45.472717 systemd[1]: sshd@23-10.128.0.18:22-147.75.109.163:55922.service: Deactivated successfully.
Sep 4 00:01:45.477694 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 00:01:45.483385 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit.
Sep 4 00:01:45.487890 systemd-logind[1531]: Removed session 24.
Sep 4 00:01:50.520469 systemd[1]: Started sshd@24-10.128.0.18:22-147.75.109.163:47994.service - OpenSSH per-connection server daemon (147.75.109.163:47994).
Sep 4 00:01:50.835603 sshd[4383]: Accepted publickey for core from 147.75.109.163 port 47994 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:50.837624 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:50.846374 systemd-logind[1531]: New session 25 of user core.
Sep 4 00:01:50.851517 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 00:01:51.145989 sshd[4385]: Connection closed by 147.75.109.163 port 47994
Sep 4 00:01:51.146935 sshd-session[4383]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:51.156696 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit.
Sep 4 00:01:51.157534 systemd[1]: sshd@24-10.128.0.18:22-147.75.109.163:47994.service: Deactivated successfully.
Sep 4 00:01:51.163355 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 00:01:51.166887 systemd-logind[1531]: Removed session 25.
Sep 4 00:01:51.203675 systemd[1]: Started sshd@25-10.128.0.18:22-147.75.109.163:48010.service - OpenSSH per-connection server daemon (147.75.109.163:48010).
Sep 4 00:01:51.523518 sshd[4398]: Accepted publickey for core from 147.75.109.163 port 48010 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:51.525977 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:51.536184 systemd-logind[1531]: New session 26 of user core.
Sep 4 00:01:51.541475 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 00:01:53.335561 containerd[1603]: time="2025-09-04T00:01:53.335373325Z" level=info msg="StopContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" with timeout 30 (s)"
Sep 4 00:01:53.338319 containerd[1603]: time="2025-09-04T00:01:53.338272428Z" level=info msg="Stop container \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" with signal terminated"
Sep 4 00:01:53.365260 systemd[1]: cri-containerd-6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64.scope: Deactivated successfully.
Sep 4 00:01:53.376107 containerd[1603]: time="2025-09-04T00:01:53.376007968Z" level=info msg="received exit event container_id:\"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" id:\"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" pid:3417 exited_at:{seconds:1756944113 nanos:375532828}"
Sep 4 00:01:53.377512 containerd[1603]: time="2025-09-04T00:01:53.377401989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" id:\"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" pid:3417 exited_at:{seconds:1756944113 nanos:375532828}"
Sep 4 00:01:53.379744 containerd[1603]: time="2025-09-04T00:01:53.379686426Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 00:01:53.390872 containerd[1603]: time="2025-09-04T00:01:53.390175524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" id:\"7142a04a8113a2adba30e7251a4ee18e0140500763612ac4c06c4a77412a4ad9\" pid:4427 exited_at:{seconds:1756944113 nanos:388665126}"
Sep 4 00:01:53.395962 containerd[1603]: time="2025-09-04T00:01:53.395719626Z" level=info msg="StopContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" with timeout 2 (s)"
Sep 4 00:01:53.396429 containerd[1603]: time="2025-09-04T00:01:53.396398977Z" level=info msg="Stop container \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" with signal terminated"
Sep 4 00:01:53.422092 systemd-networkd[1457]: lxc_health: Link DOWN
Sep 4 00:01:53.422106 systemd-networkd[1457]: lxc_health: Lost carrier
Sep 4 00:01:53.456177 containerd[1603]: time="2025-09-04T00:01:53.455360361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" pid:3486 exited_at:{seconds:1756944113 nanos:454674486}"
Sep 4 00:01:53.456177 containerd[1603]: time="2025-09-04T00:01:53.455784019Z" level=info msg="received exit event container_id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" id:\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" pid:3486 exited_at:{seconds:1756944113 nanos:454674486}"
Sep 4 00:01:53.461813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64-rootfs.mount: Deactivated successfully.
Sep 4 00:01:53.466178 systemd[1]: cri-containerd-5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944.scope: Deactivated successfully.
Sep 4 00:01:53.466807 systemd[1]: cri-containerd-5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944.scope: Consumed 10.208s CPU time, 125.6M memory peak, 144K read from disk, 13.3M written to disk.
Sep 4 00:01:53.519891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944-rootfs.mount: Deactivated successfully.
Sep 4 00:01:53.523885 containerd[1603]: time="2025-09-04T00:01:53.522205547Z" level=info msg="StopContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" returns successfully"
Sep 4 00:01:53.526273 containerd[1603]: time="2025-09-04T00:01:53.525473295Z" level=info msg="StopPodSandbox for \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\""
Sep 4 00:01:53.526273 containerd[1603]: time="2025-09-04T00:01:53.525592389Z" level=info msg="Container to stop \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.545987 systemd[1]: cri-containerd-16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c.scope: Deactivated successfully.
Sep 4 00:01:53.547441 containerd[1603]: time="2025-09-04T00:01:53.546967588Z" level=info msg="StopContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" returns successfully"
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548147859Z" level=info msg="StopPodSandbox for \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\""
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548236359Z" level=info msg="Container to stop \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548256256Z" level=info msg="Container to stop \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548275445Z" level=info msg="Container to stop \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548291652Z" level=info msg="Container to stop \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.548435 containerd[1603]: time="2025-09-04T00:01:53.548306752Z" level=info msg="Container to stop \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 00:01:53.555933 containerd[1603]: time="2025-09-04T00:01:53.555842168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" id:\"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" pid:3057 exit_status:137 exited_at:{seconds:1756944113 nanos:555376621}"
Sep 4 00:01:53.575318 systemd[1]: cri-containerd-239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae.scope: Deactivated successfully.
Sep 4 00:01:53.635204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c-rootfs.mount: Deactivated successfully.
Sep 4 00:01:53.647305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae-rootfs.mount: Deactivated successfully.
Sep 4 00:01:53.648237 containerd[1603]: time="2025-09-04T00:01:53.648099508Z" level=info msg="shim disconnected" id=239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae namespace=k8s.io
Sep 4 00:01:53.648362 containerd[1603]: time="2025-09-04T00:01:53.648245930Z" level=warning msg="cleaning up after shim disconnected" id=239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae namespace=k8s.io
Sep 4 00:01:53.648362 containerd[1603]: time="2025-09-04T00:01:53.648263001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 00:01:53.656641 containerd[1603]: time="2025-09-04T00:01:53.656587709Z" level=info msg="shim disconnected" id=16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c namespace=k8s.io
Sep 4 00:01:53.656641 containerd[1603]: time="2025-09-04T00:01:53.656636068Z" level=warning msg="cleaning up after shim disconnected" id=16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c namespace=k8s.io
Sep 4 00:01:53.657173 containerd[1603]: time="2025-09-04T00:01:53.656648949Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 00:01:53.664204 containerd[1603]: time="2025-09-04T00:01:53.658180808Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/bd670d2beb1b5e24c6a2845a30c5f54145f68a2437547784e511863216f1edc5->@: write: broken pipe" runtime=io.containerd.runc.v2
Sep 4 00:01:53.692335 containerd[1603]: time="2025-09-04T00:01:53.692267320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" id:\"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" pid:2989 exit_status:137 exited_at:{seconds:1756944113 nanos:583346583}"
Sep 4 00:01:53.694174 containerd[1603]: time="2025-09-04T00:01:53.694094924Z" level=info msg="received exit event sandbox_id:\"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" exit_status:137 exited_at:{seconds:1756944113 nanos:583346583}"
Sep 4 00:01:53.695739 containerd[1603]: time="2025-09-04T00:01:53.694211398Z" level=info msg="TearDown network for sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" successfully"
Sep 4 00:01:53.695739 containerd[1603]: time="2025-09-04T00:01:53.694236269Z" level=info msg="StopPodSandbox for \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" returns successfully"
Sep 4 00:01:53.698467 containerd[1603]: time="2025-09-04T00:01:53.698167173Z" level=info msg="TearDown network for sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" successfully"
Sep 4 00:01:53.698467 containerd[1603]: time="2025-09-04T00:01:53.698375030Z" level=info msg="StopPodSandbox for \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" returns successfully"
Sep 4 00:01:53.699889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c-shm.mount: Deactivated successfully.
Sep 4 00:01:53.700988 containerd[1603]: time="2025-09-04T00:01:53.700948158Z" level=info msg="received exit event sandbox_id:\"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" exit_status:137 exited_at:{seconds:1756944113 nanos:555376621}"
Sep 4 00:01:53.790870 kubelet[2839]: I0904 00:01:53.790753 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hubble-tls\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.790888 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-bpf-maps\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.790921 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-xtables-lock\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.790960 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-config-path\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.790984 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hostproc\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.791015 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d483060-a36a-42d0-821e-31a1ebfed99c-cilium-config-path\") pod \"4d483060-a36a-42d0-821e-31a1ebfed99c\" (UID: \"4d483060-a36a-42d0-821e-31a1ebfed99c\") "
Sep 4 00:01:53.791535 kubelet[2839]: I0904 00:01:53.791046 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-lib-modules\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791135 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrzwb\" (UniqueName: \"kubernetes.io/projected/4d483060-a36a-42d0-821e-31a1ebfed99c-kube-api-access-wrzwb\") pod \"4d483060-a36a-42d0-821e-31a1ebfed99c\" (UID: \"4d483060-a36a-42d0-821e-31a1ebfed99c\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791163 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-kernel\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791188 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-cgroup\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791218 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-etc-cni-netd\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791246 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cni-path\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.791934 kubelet[2839]: I0904 00:01:53.791281 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e4318e3-c789-4c2e-885d-5a3aca4657bc-clustermesh-secrets\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.793368 kubelet[2839]: I0904 00:01:53.791310 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9whdx\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-kube-api-access-9whdx\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.793368 kubelet[2839]: I0904 00:01:53.791340 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-run\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.793368 kubelet[2839]: I0904 00:01:53.791367 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-net\") pod \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\" (UID: \"3e4318e3-c789-4c2e-885d-5a3aca4657bc\") "
Sep 4 00:01:53.793368 kubelet[2839]: I0904 00:01:53.791497 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793368 kubelet[2839]: I0904 00:01:53.792796 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793638 kubelet[2839]: I0904 00:01:53.792878 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793638 kubelet[2839]: I0904 00:01:53.792890 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793638 kubelet[2839]: I0904 00:01:53.792907 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793638 kubelet[2839]: I0904 00:01:53.792934 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.793638 kubelet[2839]: I0904 00:01:53.793098 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.795745 kubelet[2839]: I0904 00:01:53.795680 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.795973 kubelet[2839]: I0904 00:01:53.795861 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.799033 kubelet[2839]: I0904 00:01:53.798019 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 00:01:53.804659 kubelet[2839]: I0904 00:01:53.804581 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d483060-a36a-42d0-821e-31a1ebfed99c-kube-api-access-wrzwb" (OuterVolumeSpecName: "kube-api-access-wrzwb") pod "4d483060-a36a-42d0-821e-31a1ebfed99c" (UID: "4d483060-a36a-42d0-821e-31a1ebfed99c"). InnerVolumeSpecName "kube-api-access-wrzwb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 00:01:53.807264 kubelet[2839]: I0904 00:01:53.807227 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:01:53.808133 kubelet[2839]: I0904 00:01:53.808041 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4318e3-c789-4c2e-885d-5a3aca4657bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 00:01:53.809255 kubelet[2839]: I0904 00:01:53.809224 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d483060-a36a-42d0-821e-31a1ebfed99c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d483060-a36a-42d0-821e-31a1ebfed99c" (UID: "4d483060-a36a-42d0-821e-31a1ebfed99c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:01:53.809459 kubelet[2839]: I0904 00:01:53.809424 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 00:01:53.809861 kubelet[2839]: I0904 00:01:53.809658 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-kube-api-access-9whdx" (OuterVolumeSpecName: "kube-api-access-9whdx") pod "3e4318e3-c789-4c2e-885d-5a3aca4657bc" (UID: "3e4318e3-c789-4c2e-885d-5a3aca4657bc"). InnerVolumeSpecName "kube-api-access-9whdx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892161 2839 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrzwb\" (UniqueName: \"kubernetes.io/projected/4d483060-a36a-42d0-821e-31a1ebfed99c-kube-api-access-wrzwb\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892245 2839 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-kernel\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892267 2839 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-cgroup\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892287 2839 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-etc-cni-netd\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892305 2839 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cni-path\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: I0904 00:01:53.892327 2839 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e4318e3-c789-4c2e-885d-5a3aca4657bc-clustermesh-secrets\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.892401 kubelet[2839]: 
I0904 00:01:53.892345 2839 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9whdx\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-kube-api-access-9whdx\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892410 2839 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-run\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892430 2839 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-host-proc-sys-net\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892448 2839 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hubble-tls\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892467 2839 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-bpf-maps\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892489 2839 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-xtables-lock\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892512 2839 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3e4318e3-c789-4c2e-885d-5a3aca4657bc-cilium-config-path\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893277 kubelet[2839]: I0904 00:01:53.892544 2839 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-hostproc\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893683 kubelet[2839]: I0904 00:01:53.892565 2839 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d483060-a36a-42d0-821e-31a1ebfed99c-cilium-config-path\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:53.893683 kubelet[2839]: I0904 00:01:53.892587 2839 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e4318e3-c789-4c2e-885d-5a3aca4657bc-lib-modules\") on node \"ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436\" DevicePath \"\"" Sep 4 00:01:54.172130 kubelet[2839]: I0904 00:01:54.171827 2839 scope.go:117] "RemoveContainer" containerID="5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944" Sep 4 00:01:54.182146 containerd[1603]: time="2025-09-04T00:01:54.179360768Z" level=info msg="RemoveContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\"" Sep 4 00:01:54.186602 systemd[1]: Removed slice kubepods-burstable-pod3e4318e3_c789_4c2e_885d_5a3aca4657bc.slice - libcontainer container kubepods-burstable-pod3e4318e3_c789_4c2e_885d_5a3aca4657bc.slice. Sep 4 00:01:54.186854 systemd[1]: kubepods-burstable-pod3e4318e3_c789_4c2e_885d_5a3aca4657bc.slice: Consumed 10.389s CPU time, 126.1M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 4 00:01:54.201741 systemd[1]: Removed slice kubepods-besteffort-pod4d483060_a36a_42d0_821e_31a1ebfed99c.slice - libcontainer container kubepods-besteffort-pod4d483060_a36a_42d0_821e_31a1ebfed99c.slice. Sep 4 00:01:54.202840 containerd[1603]: time="2025-09-04T00:01:54.201838917Z" level=info msg="RemoveContainer for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" returns successfully" Sep 4 00:01:54.206434 kubelet[2839]: I0904 00:01:54.206393 2839 scope.go:117] "RemoveContainer" containerID="ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9" Sep 4 00:01:54.213376 containerd[1603]: time="2025-09-04T00:01:54.213240664Z" level=info msg="RemoveContainer for \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\"" Sep 4 00:01:54.226436 containerd[1603]: time="2025-09-04T00:01:54.226389301Z" level=info msg="RemoveContainer for \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" returns successfully" Sep 4 00:01:54.230986 kubelet[2839]: I0904 00:01:54.230439 2839 scope.go:117] "RemoveContainer" containerID="3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4" Sep 4 00:01:54.239700 containerd[1603]: time="2025-09-04T00:01:54.239601114Z" level=info msg="RemoveContainer for \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\"" Sep 4 00:01:54.251912 containerd[1603]: time="2025-09-04T00:01:54.251796608Z" level=info msg="RemoveContainer for \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" returns successfully" Sep 4 00:01:54.252328 kubelet[2839]: I0904 00:01:54.252209 2839 scope.go:117] "RemoveContainer" containerID="8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35" Sep 4 00:01:54.255184 containerd[1603]: time="2025-09-04T00:01:54.254489607Z" level=info msg="RemoveContainer for \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\"" Sep 4 00:01:54.260357 containerd[1603]: time="2025-09-04T00:01:54.260292453Z" level=info 
msg="RemoveContainer for \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" returns successfully" Sep 4 00:01:54.260813 kubelet[2839]: I0904 00:01:54.260772 2839 scope.go:117] "RemoveContainer" containerID="1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4" Sep 4 00:01:54.263222 containerd[1603]: time="2025-09-04T00:01:54.263172770Z" level=info msg="RemoveContainer for \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\"" Sep 4 00:01:54.270751 containerd[1603]: time="2025-09-04T00:01:54.270682401Z" level=info msg="RemoveContainer for \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" returns successfully" Sep 4 00:01:54.271024 kubelet[2839]: I0904 00:01:54.270985 2839 scope.go:117] "RemoveContainer" containerID="5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944" Sep 4 00:01:54.271478 containerd[1603]: time="2025-09-04T00:01:54.271417246Z" level=error msg="ContainerStatus for \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\": not found" Sep 4 00:01:54.272002 kubelet[2839]: E0904 00:01:54.271703 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\": not found" containerID="5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944" Sep 4 00:01:54.272002 kubelet[2839]: I0904 00:01:54.271754 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944"} err="failed to get container status \"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"5ed730364a1f94d6aa8f58b55ebc4991444871007c67c45ee907d9a236a13944\": not found" Sep 4 00:01:54.272002 kubelet[2839]: I0904 00:01:54.271829 2839 scope.go:117] "RemoveContainer" containerID="ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9" Sep 4 00:01:54.272697 kubelet[2839]: E0904 00:01:54.272288 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\": not found" containerID="ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9" Sep 4 00:01:54.272697 kubelet[2839]: I0904 00:01:54.272319 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9"} err="failed to get container status \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\": not found" Sep 4 00:01:54.272697 kubelet[2839]: I0904 00:01:54.272352 2839 scope.go:117] "RemoveContainer" containerID="3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4" Sep 4 00:01:54.273231 containerd[1603]: time="2025-09-04T00:01:54.272094546Z" level=error msg="ContainerStatus for \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1838f1f0eae69dd29a57d975a65676735195da83a15a388586d40c388950b9\": not found" Sep 4 00:01:54.273231 containerd[1603]: time="2025-09-04T00:01:54.272752779Z" level=error msg="ContainerStatus for \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\": not found" Sep 4 00:01:54.273231 containerd[1603]: time="2025-09-04T00:01:54.273194304Z" level=error msg="ContainerStatus for \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\": not found" Sep 4 00:01:54.273655 kubelet[2839]: E0904 00:01:54.272894 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\": not found" containerID="3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4" Sep 4 00:01:54.273655 kubelet[2839]: I0904 00:01:54.272923 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4"} err="failed to get container status \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b360bad8ae6d4e1eff442e29a9c9ca0d1dec75c1c0b091cab8f638cfecc47f4\": not found" Sep 4 00:01:54.273655 kubelet[2839]: I0904 00:01:54.272949 2839 scope.go:117] "RemoveContainer" containerID="8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35" Sep 4 00:01:54.273655 kubelet[2839]: E0904 00:01:54.273446 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\": not found" containerID="8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35" Sep 4 00:01:54.273655 kubelet[2839]: I0904 00:01:54.273481 2839 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35"} err="failed to get container status \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a15c3a5f476baf11d2cfdad7d38bd1abc9b9f5005e135170f16188659202f35\": not found" Sep 4 00:01:54.273655 kubelet[2839]: I0904 00:01:54.273516 2839 scope.go:117] "RemoveContainer" containerID="1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4" Sep 4 00:01:54.275002 kubelet[2839]: E0904 00:01:54.274175 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\": not found" containerID="1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4" Sep 4 00:01:54.275002 kubelet[2839]: I0904 00:01:54.274232 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4"} err="failed to get container status \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\": not found" Sep 4 00:01:54.275002 kubelet[2839]: I0904 00:01:54.274270 2839 scope.go:117] "RemoveContainer" containerID="6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64" Sep 4 00:01:54.275862 containerd[1603]: time="2025-09-04T00:01:54.273722064Z" level=error msg="ContainerStatus for \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ae6cba2d124d8c81dc950201085cd75c48b97ae858f362a8b3fb924ff32d0c4\": not found" Sep 4 00:01:54.279181 containerd[1603]: 
time="2025-09-04T00:01:54.279131955Z" level=info msg="RemoveContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\"" Sep 4 00:01:54.285942 containerd[1603]: time="2025-09-04T00:01:54.285879382Z" level=info msg="RemoveContainer for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" returns successfully" Sep 4 00:01:54.286207 kubelet[2839]: I0904 00:01:54.286167 2839 scope.go:117] "RemoveContainer" containerID="6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64" Sep 4 00:01:54.286633 containerd[1603]: time="2025-09-04T00:01:54.286575585Z" level=error msg="ContainerStatus for \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\": not found" Sep 4 00:01:54.286982 kubelet[2839]: E0904 00:01:54.286935 2839 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\": not found" containerID="6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64" Sep 4 00:01:54.287079 kubelet[2839]: I0904 00:01:54.286983 2839 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64"} err="failed to get container status \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bae32b3c51cf885f02a9b4227cbadced0b60f6f7ed75c99ac5980ea96865b64\": not found" Sep 4 00:01:54.454844 systemd[1]: var-lib-kubelet-pods-4d483060\x2da36a\x2d42d0\x2d821e\x2d31a1ebfed99c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrzwb.mount: Deactivated successfully. 
Sep 4 00:01:54.456931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae-shm.mount: Deactivated successfully. Sep 4 00:01:54.457607 systemd[1]: var-lib-kubelet-pods-3e4318e3\x2dc789\x2d4c2e\x2d885d\x2d5a3aca4657bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9whdx.mount: Deactivated successfully. Sep 4 00:01:54.457804 systemd[1]: var-lib-kubelet-pods-3e4318e3\x2dc789\x2d4c2e\x2d885d\x2d5a3aca4657bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 00:01:54.457914 systemd[1]: var-lib-kubelet-pods-3e4318e3\x2dc789\x2d4c2e\x2d885d\x2d5a3aca4657bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 00:01:55.277835 containerd[1603]: time="2025-09-04T00:01:55.277761294Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1756944113 nanos:555376621}" Sep 4 00:01:55.289853 sshd[4402]: Connection closed by 147.75.109.163 port 48010 Sep 4 00:01:55.290989 sshd-session[4398]: pam_unix(sshd:session): session closed for user core Sep 4 00:01:55.296289 systemd[1]: sshd@25-10.128.0.18:22-147.75.109.163:48010.service: Deactivated successfully. Sep 4 00:01:55.301850 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 00:01:55.302438 systemd[1]: session-26.scope: Consumed 1.006s CPU time, 22.5M memory peak. Sep 4 00:01:55.305279 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit. Sep 4 00:01:55.308527 systemd-logind[1531]: Removed session 26. Sep 4 00:01:55.349206 systemd[1]: Started sshd@26-10.128.0.18:22-147.75.109.163:48026.service - OpenSSH per-connection server daemon (147.75.109.163:48026). 
Sep 4 00:01:55.452147 ntpd[1518]: Deleting interface #12 lxc_health, fe80::2cc4:eff:fe94:18f3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Sep 4 00:01:55.452852 ntpd[1518]: 4 Sep 00:01:55 ntpd[1518]: Deleting interface #12 lxc_health, fe80::2cc4:eff:fe94:18f3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Sep 4 00:01:55.652028 kubelet[2839]: I0904 00:01:55.651932 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4318e3-c789-4c2e-885d-5a3aca4657bc" path="/var/lib/kubelet/pods/3e4318e3-c789-4c2e-885d-5a3aca4657bc/volumes" Sep 4 00:01:55.653267 kubelet[2839]: I0904 00:01:55.653228 2839 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d483060-a36a-42d0-821e-31a1ebfed99c" path="/var/lib/kubelet/pods/4d483060-a36a-42d0-821e-31a1ebfed99c/volumes" Sep 4 00:01:55.660869 sshd[4560]: Accepted publickey for core from 147.75.109.163 port 48026 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0 Sep 4 00:01:55.662994 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 00:01:55.671397 systemd-logind[1531]: New session 27 of user core. Sep 4 00:01:55.688937 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 00:01:56.904263 sshd[4562]: Connection closed by 147.75.109.163 port 48026 Sep 4 00:01:56.907305 sshd-session[4560]: pam_unix(sshd:session): session closed for user core Sep 4 00:01:56.921903 systemd[1]: sshd@26-10.128.0.18:22-147.75.109.163:48026.service: Deactivated successfully. Sep 4 00:01:56.929825 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 00:01:56.938036 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit. Sep 4 00:01:56.944023 systemd-logind[1531]: Removed session 27. 
Sep 4 00:01:56.974138 systemd[1]: Created slice kubepods-burstable-pod4c5329a0_ceb8_451f_b0ea_50b3e7734426.slice - libcontainer container kubepods-burstable-pod4c5329a0_ceb8_451f_b0ea_50b3e7734426.slice. Sep 4 00:01:56.979134 systemd[1]: Started sshd@27-10.128.0.18:22-147.75.109.163:48028.service - OpenSSH per-connection server daemon (147.75.109.163:48028). Sep 4 00:01:57.015203 kubelet[2839]: I0904 00:01:57.013021 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-bpf-maps\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.015918 kubelet[2839]: I0904 00:01:57.015230 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-etc-cni-netd\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.015918 kubelet[2839]: I0904 00:01:57.015316 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c5329a0-ceb8-451f-b0ea-50b3e7734426-hubble-tls\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.015918 kubelet[2839]: I0904 00:01:57.015386 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-cilium-run\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.015918 kubelet[2839]: I0904 00:01:57.015419 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-cilium-cgroup\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.015918 kubelet[2839]: I0904 00:01:57.015486 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-xtables-lock\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.016233 kubelet[2839]: I0904 00:01:57.016146 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-host-proc-sys-kernel\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.016289 kubelet[2839]: I0904 00:01:57.016235 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszlj\" (UniqueName: \"kubernetes.io/projected/4c5329a0-ceb8-451f-b0ea-50b3e7734426-kube-api-access-rszlj\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.016364 kubelet[2839]: I0904 00:01:57.016300 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-cni-path\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf" Sep 4 00:01:57.016422 kubelet[2839]: I0904 00:01:57.016377 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c5329a0-ceb8-451f-b0ea-50b3e7734426-clustermesh-secrets\") 
pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.016473 kubelet[2839]: I0904 00:01:57.016439 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c5329a0-ceb8-451f-b0ea-50b3e7734426-cilium-config-path\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.019008 kubelet[2839]: I0904 00:01:57.016524 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-lib-modules\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.019008 kubelet[2839]: I0904 00:01:57.016611 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c5329a0-ceb8-451f-b0ea-50b3e7734426-cilium-ipsec-secrets\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.019008 kubelet[2839]: I0904 00:01:57.017125 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-hostproc\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.019008 kubelet[2839]: I0904 00:01:57.017211 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c5329a0-ceb8-451f-b0ea-50b3e7734426-host-proc-sys-net\") pod \"cilium-mg4zf\" (UID: \"4c5329a0-ceb8-451f-b0ea-50b3e7734426\") " pod="kube-system/cilium-mg4zf"
Sep 4 00:01:57.293311 containerd[1603]: time="2025-09-04T00:01:57.292092466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mg4zf,Uid:4c5329a0-ceb8-451f-b0ea-50b3e7734426,Namespace:kube-system,Attempt:0,}"
Sep 4 00:01:57.337936 containerd[1603]: time="2025-09-04T00:01:57.337864466Z" level=info msg="connecting to shim 53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" namespace=k8s.io protocol=ttrpc version=3
Sep 4 00:01:57.357120 sshd[4573]: Accepted publickey for core from 147.75.109.163 port 48028 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:57.361181 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:57.374480 systemd-logind[1531]: New session 28 of user core.
Sep 4 00:01:57.379324 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 00:01:57.396355 systemd[1]: Started cri-containerd-53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29.scope - libcontainer container 53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29.
Sep 4 00:01:57.447423 containerd[1603]: time="2025-09-04T00:01:57.447313989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mg4zf,Uid:4c5329a0-ceb8-451f-b0ea-50b3e7734426,Namespace:kube-system,Attempt:0,} returns sandbox id \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\""
Sep 4 00:01:57.461792 containerd[1603]: time="2025-09-04T00:01:57.461689252Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 00:01:57.479628 containerd[1603]: time="2025-09-04T00:01:57.478414105Z" level=info msg="Container c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:01:57.489185 containerd[1603]: time="2025-09-04T00:01:57.489106422Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\""
Sep 4 00:01:57.490039 containerd[1603]: time="2025-09-04T00:01:57.489997875Z" level=info msg="StartContainer for \"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\""
Sep 4 00:01:57.491699 containerd[1603]: time="2025-09-04T00:01:57.491659198Z" level=info msg="connecting to shim c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" protocol=ttrpc version=3
Sep 4 00:01:57.527419 systemd[1]: Started cri-containerd-c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2.scope - libcontainer container c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2.
Sep 4 00:01:57.573000 sshd[4610]: Connection closed by 147.75.109.163 port 48028
Sep 4 00:01:57.573444 sshd-session[4573]: pam_unix(sshd:session): session closed for user core
Sep 4 00:01:57.586864 systemd[1]: sshd@27-10.128.0.18:22-147.75.109.163:48028.service: Deactivated successfully.
Sep 4 00:01:57.589724 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit.
Sep 4 00:01:57.593992 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 00:01:57.600722 containerd[1603]: time="2025-09-04T00:01:57.600668473Z" level=info msg="StopPodSandbox for \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\""
Sep 4 00:01:57.600936 containerd[1603]: time="2025-09-04T00:01:57.600902654Z" level=info msg="TearDown network for sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" successfully"
Sep 4 00:01:57.601018 containerd[1603]: time="2025-09-04T00:01:57.600936640Z" level=info msg="StopPodSandbox for \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" returns successfully"
Sep 4 00:01:57.604632 systemd-logind[1531]: Removed session 28.
Sep 4 00:01:57.607520 containerd[1603]: time="2025-09-04T00:01:57.607464533Z" level=info msg="RemovePodSandbox for \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\""
Sep 4 00:01:57.607717 containerd[1603]: time="2025-09-04T00:01:57.607642359Z" level=info msg="Forcibly stopping sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\""
Sep 4 00:01:57.609096 containerd[1603]: time="2025-09-04T00:01:57.608994746Z" level=info msg="TearDown network for sandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" successfully"
Sep 4 00:01:57.618682 containerd[1603]: time="2025-09-04T00:01:57.617825401Z" level=info msg="Ensure that sandbox 239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae in task-service has been cleanup successfully"
Sep 4 00:01:57.631958 containerd[1603]: time="2025-09-04T00:01:57.631876833Z" level=info msg="StartContainer for \"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\" returns successfully"
Sep 4 00:01:57.643080 containerd[1603]: time="2025-09-04T00:01:57.641039360Z" level=info msg="RemovePodSandbox \"239f65a922e243f1728bf486e6d8f4c486e7331204fbb7e5b706a9234c1158ae\" returns successfully"
Sep 4 00:01:57.643080 containerd[1603]: time="2025-09-04T00:01:57.642147721Z" level=info msg="StopPodSandbox for \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\""
Sep 4 00:01:57.643080 containerd[1603]: time="2025-09-04T00:01:57.642328829Z" level=info msg="TearDown network for sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" successfully"
Sep 4 00:01:57.643080 containerd[1603]: time="2025-09-04T00:01:57.642351175Z" level=info msg="StopPodSandbox for \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" returns successfully"
Sep 4 00:01:57.643426 systemd[1]: Started sshd@28-10.128.0.18:22-147.75.109.163:48044.service - OpenSSH per-connection server daemon (147.75.109.163:48044).
Sep 4 00:01:57.648207 containerd[1603]: time="2025-09-04T00:01:57.647451185Z" level=info msg="RemovePodSandbox for \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\""
Sep 4 00:01:57.648207 containerd[1603]: time="2025-09-04T00:01:57.648192523Z" level=info msg="Forcibly stopping sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\""
Sep 4 00:01:57.648472 containerd[1603]: time="2025-09-04T00:01:57.648439519Z" level=info msg="TearDown network for sandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" successfully"
Sep 4 00:01:57.649341 systemd[1]: cri-containerd-c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2.scope: Deactivated successfully.
Sep 4 00:01:57.662890 containerd[1603]: time="2025-09-04T00:01:57.662849013Z" level=info msg="Ensure that sandbox 16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c in task-service has been cleanup successfully"
Sep 4 00:01:57.664519 containerd[1603]: time="2025-09-04T00:01:57.664482964Z" level=info msg="received exit event container_id:\"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\" id:\"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\" pid:4638 exited_at:{seconds:1756944117 nanos:663333975}"
Sep 4 00:01:57.665842 containerd[1603]: time="2025-09-04T00:01:57.665806138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\" id:\"c19dfb579056283d1b149a146084baa9c070acf4512b8180f823d6d14b13c2a2\" pid:4638 exited_at:{seconds:1756944117 nanos:663333975}"
Sep 4 00:01:57.675443 containerd[1603]: time="2025-09-04T00:01:57.675400361Z" level=info msg="RemovePodSandbox \"16b756d7a0fa65aff001d5ef9e7497f62881be77ffe122d4b8d56675df07fd8c\" returns successfully"
Sep 4 00:01:57.778695 kubelet[2839]: E0904 00:01:57.778630 2839 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 00:01:57.999847 sshd[4660]: Accepted publickey for core from 147.75.109.163 port 48044 ssh2: RSA SHA256:YXdY3oiYEYSsF9UfuBnolXSYt1JubZZW1SENPyiblq0
Sep 4 00:01:58.002501 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 00:01:58.011098 systemd-logind[1531]: New session 29 of user core.
Sep 4 00:01:58.023576 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 00:01:58.223988 containerd[1603]: time="2025-09-04T00:01:58.222298023Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 00:01:58.258611 containerd[1603]: time="2025-09-04T00:01:58.258445175Z" level=info msg="Container d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:01:58.266497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029534917.mount: Deactivated successfully.
Sep 4 00:01:58.281305 containerd[1603]: time="2025-09-04T00:01:58.281227013Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\""
Sep 4 00:01:58.286278 containerd[1603]: time="2025-09-04T00:01:58.286229056Z" level=info msg="StartContainer for \"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\""
Sep 4 00:01:58.289473 containerd[1603]: time="2025-09-04T00:01:58.289389236Z" level=info msg="connecting to shim d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" protocol=ttrpc version=3
Sep 4 00:01:58.362587 systemd[1]: Started cri-containerd-d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542.scope - libcontainer container d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542.
Sep 4 00:01:58.442773 containerd[1603]: time="2025-09-04T00:01:58.442668141Z" level=info msg="StartContainer for \"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\" returns successfully"
Sep 4 00:01:58.452441 systemd[1]: cri-containerd-d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542.scope: Deactivated successfully.
Sep 4 00:01:58.453787 containerd[1603]: time="2025-09-04T00:01:58.453698428Z" level=info msg="received exit event container_id:\"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\" id:\"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\" pid:4697 exited_at:{seconds:1756944118 nanos:453339210}"
Sep 4 00:01:58.456618 containerd[1603]: time="2025-09-04T00:01:58.456552138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\" id:\"d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542\" pid:4697 exited_at:{seconds:1756944118 nanos:453339210}"
Sep 4 00:01:58.497744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5c8f1e6d0f4dcf6a4c247157c233979a9a829099aca4860228f333cd8608542-rootfs.mount: Deactivated successfully.
Sep 4 00:01:59.230081 containerd[1603]: time="2025-09-04T00:01:59.229403616Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 00:01:59.255456 containerd[1603]: time="2025-09-04T00:01:59.255389092Z" level=info msg="Container 1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:01:59.271560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153941178.mount: Deactivated successfully.
Sep 4 00:01:59.279536 containerd[1603]: time="2025-09-04T00:01:59.279488827Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\""
Sep 4 00:01:59.281108 containerd[1603]: time="2025-09-04T00:01:59.280380130Z" level=info msg="StartContainer for \"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\""
Sep 4 00:01:59.284249 containerd[1603]: time="2025-09-04T00:01:59.284194607Z" level=info msg="connecting to shim 1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" protocol=ttrpc version=3
Sep 4 00:01:59.332496 systemd[1]: Started cri-containerd-1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199.scope - libcontainer container 1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199.
Sep 4 00:01:59.408892 containerd[1603]: time="2025-09-04T00:01:59.408815485Z" level=info msg="StartContainer for \"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\" returns successfully"
Sep 4 00:01:59.408946 systemd[1]: cri-containerd-1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199.scope: Deactivated successfully.
Sep 4 00:01:59.415551 containerd[1603]: time="2025-09-04T00:01:59.415501510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\" id:\"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\" pid:4742 exited_at:{seconds:1756944119 nanos:415234581}"
Sep 4 00:01:59.415551 containerd[1603]: time="2025-09-04T00:01:59.415501796Z" level=info msg="received exit event container_id:\"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\" id:\"1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199\" pid:4742 exited_at:{seconds:1756944119 nanos:415234581}"
Sep 4 00:01:59.478981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cedc1e2bd587f1d3ec2ff22307f09a3ef9c8e3d916e8c53ac97d95fda5cd199-rootfs.mount: Deactivated successfully.
Sep 4 00:01:59.665120 kubelet[2839]: I0904 00:01:59.663608 2839 setters.go:618] "Node became not ready" node="ci-4372-1-0-nightly-20250903-2100-4b023332e2c65e689436" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T00:01:59Z","lastTransitionTime":"2025-09-04T00:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 00:02:00.235079 containerd[1603]: time="2025-09-04T00:02:00.234972389Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 00:02:00.251041 containerd[1603]: time="2025-09-04T00:02:00.250099158Z" level=info msg="Container db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:02:00.265255 containerd[1603]: time="2025-09-04T00:02:00.264769778Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\""
Sep 4 00:02:00.266674 containerd[1603]: time="2025-09-04T00:02:00.266643684Z" level=info msg="StartContainer for \"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\""
Sep 4 00:02:00.279001 containerd[1603]: time="2025-09-04T00:02:00.278818152Z" level=info msg="connecting to shim db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" protocol=ttrpc version=3
Sep 4 00:02:00.346564 systemd[1]: Started cri-containerd-db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25.scope - libcontainer container db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25.
Sep 4 00:02:00.435003 systemd[1]: cri-containerd-db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25.scope: Deactivated successfully.
Sep 4 00:02:00.437760 containerd[1603]: time="2025-09-04T00:02:00.437706453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\" id:\"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\" pid:4782 exited_at:{seconds:1756944120 nanos:436320954}"
Sep 4 00:02:00.442976 containerd[1603]: time="2025-09-04T00:02:00.442899717Z" level=info msg="received exit event container_id:\"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\" id:\"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\" pid:4782 exited_at:{seconds:1756944120 nanos:436320954}"
Sep 4 00:02:00.460736 containerd[1603]: time="2025-09-04T00:02:00.460682955Z" level=info msg="StartContainer for \"db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25\" returns successfully"
Sep 4 00:02:00.497953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db118a8291310b8cf180949ed0a09336385272ffa3bdaaf868e211b1f1552e25-rootfs.mount: Deactivated successfully.
Sep 4 00:02:01.241503 containerd[1603]: time="2025-09-04T00:02:01.241231229Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 00:02:01.268038 containerd[1603]: time="2025-09-04T00:02:01.267976600Z" level=info msg="Container b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53: CDI devices from CRI Config.CDIDevices: []"
Sep 4 00:02:01.291934 containerd[1603]: time="2025-09-04T00:02:01.291712911Z" level=info msg="CreateContainer within sandbox \"53d1005097ffc887f0b80a8ce11970e954191318f7f823125fbd07c6e99bfe29\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\""
Sep 4 00:02:01.295888 containerd[1603]: time="2025-09-04T00:02:01.293631045Z" level=info msg="StartContainer for \"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\""
Sep 4 00:02:01.297290 containerd[1603]: time="2025-09-04T00:02:01.297245905Z" level=info msg="connecting to shim b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53" address="unix:///run/containerd/s/5cb50ad650ba3a3f099b5ba86943bf367365efd8de43f2b0be378ead6d430d43" protocol=ttrpc version=3
Sep 4 00:02:01.348442 systemd[1]: Started cri-containerd-b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53.scope - libcontainer container b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53.
Sep 4 00:02:01.419400 containerd[1603]: time="2025-09-04T00:02:01.419239272Z" level=info msg="StartContainer for \"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" returns successfully"
Sep 4 00:02:01.544702 containerd[1603]: time="2025-09-04T00:02:01.544553031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"a7d27a139765a93ebda55a6e31be9882387b8644250bb7b24124d25384a2aeb7\" pid:4848 exited_at:{seconds:1756944121 nanos:544216872}"
Sep 4 00:02:02.275151 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 4 00:02:02.292902 kubelet[2839]: I0904 00:02:02.292683 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mg4zf" podStartSLOduration=6.292652039 podStartE2EDuration="6.292652039s" podCreationTimestamp="2025-09-04 00:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:02:02.29144781 +0000 UTC m=+125.014344194" watchObservedRunningTime="2025-09-04 00:02:02.292652039 +0000 UTC m=+125.015548391"
Sep 4 00:02:02.774633 containerd[1603]: time="2025-09-04T00:02:02.774578278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"e9ce3336633886f47888e6926ebd110d4c673ea8385e886486142edf6e2e760c\" pid:4924 exit_status:1 exited_at:{seconds:1756944122 nanos:774191133}"
Sep 4 00:02:05.107925 containerd[1603]: time="2025-09-04T00:02:05.107754572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"efbf730f273bf63b9674fad0afe4d09bea2d6c1bc8112adbb19a3157ed652236\" pid:5076 exit_status:1 exited_at:{seconds:1756944125 nanos:106750304}"
Sep 4 00:02:06.617254 systemd-networkd[1457]: lxc_health: Link UP
Sep 4 00:02:06.628046 systemd-networkd[1457]: lxc_health: Gained carrier
Sep 4 00:02:07.494782 containerd[1603]: time="2025-09-04T00:02:07.494722164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"2118d46aeb00542a3738c541e6fbc5b571a38ccb9f21c010b0161ba1e89af740\" pid:5389 exited_at:{seconds:1756944127 nanos:494123138}"
Sep 4 00:02:07.891480 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Sep 4 00:02:09.729726 containerd[1603]: time="2025-09-04T00:02:09.729666186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"358cb5519362d34cf5abdf5ca2302981c241099c211ce6ce2e647e604eaa4079\" pid:5427 exited_at:{seconds:1756944129 nanos:728044842}"
Sep 4 00:02:10.451951 ntpd[1518]: Listen normally on 15 lxc_health [fe80::dc66:8cff:fe91:c65%14]:123
Sep 4 00:02:10.452746 ntpd[1518]: 4 Sep 00:02:10 ntpd[1518]: Listen normally on 15 lxc_health [fe80::dc66:8cff:fe91:c65%14]:123
Sep 4 00:02:11.992416 containerd[1603]: time="2025-09-04T00:02:11.992351180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b553ec6fbdd130585b7af98ba92a39ca6fad29a7b2c04086cc31d4072e3f3d53\" id:\"142af4cd7eb479ebc0c51e28f5f9074511439cdd8fb18af2d47d27c718a1edde\" pid:5451 exited_at:{seconds:1756944131 nanos:991027784}"
Sep 4 00:02:12.049961 sshd[4677]: Connection closed by 147.75.109.163 port 48044
Sep 4 00:02:12.051449 sshd-session[4660]: pam_unix(sshd:session): session closed for user core
Sep 4 00:02:12.065251 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit.
Sep 4 00:02:12.069113 systemd[1]: sshd@28-10.128.0.18:22-147.75.109.163:48044.service: Deactivated successfully.
Sep 4 00:02:12.075762 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 00:02:12.081539 systemd-logind[1531]: Removed session 29.