Jul 15 23:58:35.159235 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025 Jul 15 23:58:35.159282 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:58:35.159301 kernel: BIOS-provided physical RAM map: Jul 15 23:58:35.159315 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 15 23:58:35.159328 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 15 23:58:35.159341 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 15 23:58:35.159361 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 15 23:58:35.159375 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 15 23:58:35.159389 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jul 15 23:58:35.159403 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jul 15 23:58:35.159461 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jul 15 23:58:35.159475 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 15 23:58:35.159489 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 15 23:58:35.159503 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 15 23:58:35.159525 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 15 23:58:35.159541 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 15 23:58:35.159556 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 15 23:58:35.159571 kernel: NX (Execute Disable) protection: active Jul 15 23:58:35.159586 kernel: APIC: Static calls initialized Jul 15 23:58:35.159601 kernel: efi: EFI v2.7 by EDK II Jul 15 23:58:35.159617 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jul 15 23:58:35.159634 kernel: random: crng init done Jul 15 23:58:35.159653 kernel: secureboot: Secure boot disabled Jul 15 23:58:35.159668 kernel: SMBIOS 2.4 present. 
Jul 15 23:58:35.159684 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jul 15 23:58:35.159698 kernel: DMI: Memory slots populated: 1/1 Jul 15 23:58:35.159713 kernel: Hypervisor detected: KVM Jul 15 23:58:35.159729 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 23:58:35.159744 kernel: kvm-clock: using sched offset of 15101341862 cycles Jul 15 23:58:35.159761 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 23:58:35.159777 kernel: tsc: Detected 2299.998 MHz processor Jul 15 23:58:35.159793 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 23:58:35.159836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 23:58:35.159851 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 15 23:58:35.159867 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jul 15 23:58:35.159883 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 23:58:35.159898 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 15 23:58:35.159913 kernel: Using GB pages for direct mapping Jul 15 23:58:35.159928 kernel: ACPI: Early table checksum verification disabled Jul 15 23:58:35.159945 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 15 23:58:35.159971 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 15 23:58:35.159988 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 15 23:58:35.160005 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 15 23:58:35.160021 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 15 23:58:35.160038 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jul 15 23:58:35.160054 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 15 23:58:35.160073 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 15 23:58:35.160090 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 15 23:58:35.160107 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 15 23:58:35.160124 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 15 23:58:35.160140 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 15 23:58:35.160156 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 15 23:58:35.160173 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 15 23:58:35.160190 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 15 23:58:35.160206 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 15 23:58:35.160226 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 15 23:58:35.160242 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 15 23:58:35.160259 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 15 23:58:35.160276 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 15 23:58:35.160293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 15 23:58:35.160310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 15 23:58:35.160327 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jul 15 23:58:35.160343 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jul 15 23:58:35.160360 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jul 15 23:58:35.160381 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Jul 15 23:58:35.160397 kernel: Zone ranges: Jul 15 23:58:35.160427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 23:58:35.160444 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 15 23:58:35.160460 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 15 23:58:35.160477 kernel: Device empty Jul 15 23:58:35.160494 kernel: Movable zone start for each node Jul 15 23:58:35.160510 kernel: Early memory node ranges Jul 15 23:58:35.160527 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 15 23:58:35.160549 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 15 23:58:35.160566 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jul 15 23:58:35.160581 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jul 15 23:58:35.160597 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 15 23:58:35.160614 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 15 23:58:35.160632 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 15 23:58:35.160648 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 23:58:35.160664 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 15 23:58:35.160681 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 15 23:58:35.160698 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jul 15 23:58:35.160719 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 15 23:58:35.160736 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 15 23:58:35.160753 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 15 23:58:35.160770 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 23:58:35.160787 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 23:58:35.162566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 23:58:35.162589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 23:58:35.162607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 23:58:35.162629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 23:58:35.162646 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 23:58:35.162662 kernel: CPU topo: Max. logical packages: 1 Jul 15 23:58:35.162679 kernel: CPU topo: Max. logical dies: 1 Jul 15 23:58:35.162695 kernel: CPU topo: Max. dies per package: 1 Jul 15 23:58:35.162711 kernel: CPU topo: Max. threads per core: 2 Jul 15 23:58:35.162728 kernel: CPU topo: Num. cores per package: 1 Jul 15 23:58:35.162744 kernel: CPU topo: Num. 
threads per package: 2 Jul 15 23:58:35.162760 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 15 23:58:35.162777 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 15 23:58:35.162823 kernel: Booting paravirtualized kernel on KVM Jul 15 23:58:35.162842 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 23:58:35.162859 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 15 23:58:35.162876 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 15 23:58:35.162893 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 15 23:58:35.162909 kernel: pcpu-alloc: [0] 0 1 Jul 15 23:58:35.162925 kernel: kvm-guest: PV spinlocks enabled Jul 15 23:58:35.162941 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 23:58:35.162960 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:58:35.162982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 23:58:35.162998 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 15 23:58:35.163015 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 23:58:35.163032 kernel: Fallback order for Node 0: 0 Jul 15 23:58:35.163049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Jul 15 23:58:35.163065 kernel: Policy zone: Normal Jul 15 23:58:35.163082 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 23:58:35.163099 kernel: software IO TLB: area num 2. Jul 15 23:58:35.163131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 23:58:35.163149 kernel: Kernel/User page tables isolation: enabled Jul 15 23:58:35.163167 kernel: ftrace: allocating 40095 entries in 157 pages Jul 15 23:58:35.163188 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 23:58:35.163205 kernel: Dynamic Preempt: voluntary Jul 15 23:58:35.163223 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 23:58:35.163242 kernel: rcu: RCU event tracing is enabled. Jul 15 23:58:35.163260 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 23:58:35.163278 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 23:58:35.163298 kernel: Rude variant of Tasks RCU enabled. Jul 15 23:58:35.163316 kernel: Tracing variant of Tasks RCU enabled. Jul 15 23:58:35.163333 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 23:58:35.163351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 23:58:35.163369 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:58:35.163386 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:58:35.163404 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:58:35.163432 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 15 23:58:35.163449 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 15 23:58:35.163465 kernel: Console: colour dummy device 80x25 Jul 15 23:58:35.163482 kernel: printk: legacy console [ttyS0] enabled Jul 15 23:58:35.163499 kernel: ACPI: Core revision 20240827 Jul 15 23:58:35.163517 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 23:58:35.163536 kernel: x2apic enabled Jul 15 23:58:35.163554 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 23:58:35.163572 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 15 23:58:35.163591 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 15 23:58:35.163614 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jul 15 23:58:35.163632 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 15 23:58:35.163651 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 15 23:58:35.163669 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 23:58:35.163688 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 15 23:58:35.163706 kernel: Spectre V2 : Mitigation: IBRS Jul 15 23:58:35.163725 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 23:58:35.163743 kernel: RETBleed: Mitigation: IBRS Jul 15 23:58:35.163765 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 23:58:35.163783 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jul 15 23:58:35.164905 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 15 23:58:35.164930 kernel: MDS: Mitigation: Clear CPU buffers Jul 15 23:58:35.164950 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 23:58:35.164969 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 23:58:35.164988 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 23:58:35.165006 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 23:58:35.165025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 23:58:35.165050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 23:58:35.165069 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 15 23:58:35.165088 kernel: Freeing SMP alternatives memory: 32K Jul 15 23:58:35.165106 kernel: pid_max: default: 32768 minimum: 301 Jul 15 23:58:35.165124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 23:58:35.165143 kernel: landlock: Up and running. Jul 15 23:58:35.165161 kernel: SELinux: Initializing. Jul 15 23:58:35.165179 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 23:58:35.165198 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 23:58:35.165246 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 15 23:58:35.165265 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 15 23:58:35.165283 kernel: signal: max sigframe size: 1776 Jul 15 23:58:35.165301 kernel: rcu: Hierarchical SRCU implementation. Jul 15 23:58:35.165321 kernel: rcu: Max phase no-delay instances is 400. 
Jul 15 23:58:35.165339 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 23:58:35.165358 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 23:58:35.165376 kernel: smp: Bringing up secondary CPUs ... Jul 15 23:58:35.165395 kernel: smpboot: x86: Booting SMP configuration: Jul 15 23:58:35.165423 kernel: .... node #0, CPUs: #1 Jul 15 23:58:35.165444 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 15 23:58:35.165464 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 15 23:58:35.165482 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 23:58:35.165500 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 15 23:58:35.165519 kernel: Memory: 7564264K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 290712K reserved, 0K cma-reserved) Jul 15 23:58:35.165537 kernel: devtmpfs: initialized Jul 15 23:58:35.165556 kernel: x86/mm: Memory block size: 128MB Jul 15 23:58:35.165578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 15 23:58:35.165597 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 23:58:35.165616 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 23:58:35.165634 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 23:58:35.165652 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 23:58:35.165670 kernel: audit: initializing netlink subsys (disabled) Jul 15 23:58:35.165689 kernel: audit: type=2000 audit(1752623910.222:1): state=initialized audit_enabled=0 res=1 Jul 15 23:58:35.165706 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 23:58:35.165724 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 23:58:35.165745 kernel: cpuidle: using governor menu Jul 15 23:58:35.165764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 23:58:35.165783 kernel: dca service started, version 1.12.1 Jul 15 23:58:35.166825 kernel: PCI: Using configuration type 1 for base access Jul 15 23:58:35.166851 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 23:58:35.166870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 23:58:35.166887 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 23:58:35.166905 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 23:58:35.166923 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 23:58:35.166946 kernel: ACPI: Added _OSI(Module Device) Jul 15 23:58:35.166965 kernel: ACPI: Added _OSI(Processor Device) Jul 15 23:58:35.166983 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 23:58:35.167002 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 15 23:58:35.167021 kernel: ACPI: Interpreter enabled Jul 15 23:58:35.167037 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 23:58:35.167056 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 23:58:35.167072 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 23:58:35.167090 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 15 23:58:35.167113 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 15 23:58:35.167132 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 23:58:35.167389 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 15 23:58:35.167590 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 15 23:58:35.167773 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 15 23:58:35.168837 kernel: PCI host bridge to bus 0000:00 Jul 15 23:58:35.169054 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 23:58:35.169231 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 23:58:35.169392 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 23:58:35.169574 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 15 23:58:35.169742 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 23:58:35.172021 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 15 23:58:35.172249 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 15 23:58:35.172472 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 15 23:58:35.172673 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 15 23:58:35.172917 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jul 15 23:58:35.173112 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jul 15 23:58:35.173299 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jul 15 23:58:35.173500 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 15 23:58:35.173688 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jul 15 23:58:35.175910 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jul 15 23:58:35.176129 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 23:58:35.176318 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jul 15 23:58:35.176516 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jul 15 23:58:35.176539 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 23:58:35.176558 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 23:58:35.176576 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Jul 15 23:58:35.176599 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 23:58:35.176619 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 15 23:58:35.176642 kernel: iommu: Default domain type: Translated Jul 15 23:58:35.176662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 23:58:35.176686 kernel: efivars: Registered efivars operations Jul 15 23:58:35.176706 kernel: PCI: Using ACPI for IRQ routing Jul 15 23:58:35.176723 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 23:58:35.176740 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 15 23:58:35.176757 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 15 23:58:35.176779 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jul 15 23:58:35.176811 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 15 23:58:35.176830 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 15 23:58:35.176847 kernel: vgaarb: loaded Jul 15 23:58:35.176865 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 23:58:35.176882 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 23:58:35.176900 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 23:58:35.176917 kernel: pnp: PnP ACPI init Jul 15 23:58:35.176935 kernel: pnp: PnP ACPI: found 7 devices Jul 15 23:58:35.176957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 23:58:35.176975 kernel: NET: Registered PF_INET protocol family Jul 15 23:58:35.176993 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 23:58:35.177011 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 15 23:58:35.177028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 23:58:35.177046 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 23:58:35.177063 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 15 23:58:35.177081 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 15 23:58:35.177099 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 23:58:35.177120 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 23:58:35.177138 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 23:58:35.177155 kernel: NET: Registered PF_XDP protocol family Jul 15 23:58:35.177339 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 23:58:35.177509 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 23:58:35.177668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 23:58:35.179874 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 15 23:58:35.180079 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 15 23:58:35.180109 kernel: PCI: CLS 0 bytes, default 64 Jul 15 23:58:35.180128 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 15 23:58:35.180146 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jul 15 23:58:35.180164 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 15 23:58:35.180182 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 15 23:58:35.180200 kernel: clocksource: Switched to clocksource tsc Jul 15 23:58:35.180218 
kernel: Initialise system trusted keyrings Jul 15 23:58:35.180235 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 15 23:58:35.180256 kernel: Key type asymmetric registered Jul 15 23:58:35.180274 kernel: Asymmetric key parser 'x509' registered Jul 15 23:58:35.180292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 23:58:35.180309 kernel: io scheduler mq-deadline registered Jul 15 23:58:35.180327 kernel: io scheduler kyber registered Jul 15 23:58:35.180344 kernel: io scheduler bfq registered Jul 15 23:58:35.180361 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 23:58:35.180380 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 15 23:58:35.180573 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 15 23:58:35.180600 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 15 23:58:35.180789 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 15 23:58:35.180827 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 15 23:58:35.181005 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 15 23:58:35.181027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 23:58:35.181045 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 23:58:35.181063 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 15 23:58:35.181081 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 15 23:58:35.181098 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 15 23:58:35.181284 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 15 23:58:35.181308 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 23:58:35.181327 kernel: i8042: Warning: Keylock active Jul 15 23:58:35.181345 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 23:58:35.181363 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 23:58:35.181564 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 15 23:58:35.181845 kernel: rtc_cmos 00:00: registered as rtc0 Jul 15 23:58:35.182042 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T23:58:34 UTC (1752623914) Jul 15 23:58:35.182221 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 15 23:58:35.182245 kernel: intel_pstate: CPU model not supported Jul 15 23:58:35.182264 kernel: pstore: Using crash dump compression: deflate Jul 15 23:58:35.182282 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 23:58:35.182301 kernel: NET: Registered PF_INET6 protocol family Jul 15 23:58:35.182319 kernel: Segment Routing with IPv6 Jul 15 23:58:35.182345 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 23:58:35.182363 kernel: NET: Registered PF_PACKET protocol family Jul 15 23:58:35.182386 kernel: Key type dns_resolver registered Jul 15 23:58:35.182403 kernel: IPI shorthand broadcast: enabled Jul 15 23:58:35.182430 kernel: sched_clock: Marking stable (4112006348, 177469176)->(4448483200, -159007676) Jul 15 23:58:35.182448 kernel: registered taskstats version 1 Jul 15 23:58:35.182466 kernel: Loading compiled-in X.509 certificates Jul 15 23:58:35.182483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d' Jul 15 23:58:35.182499 kernel: Demotion targets for Node 0: null Jul 15 23:58:35.182516 kernel: Key type .fscrypt registered Jul 15 23:58:35.182534 kernel: Key type fscrypt-provisioning registered Jul 15 
23:58:35.182556 kernel: ima: Allocated hash algorithm: sha1 Jul 15 23:58:35.182574 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 23:58:35.182591 kernel: ima: No architecture policies found Jul 15 23:58:35.182609 kernel: clk: Disabling unused clocks Jul 15 23:58:35.182627 kernel: Warning: unable to open an initial console. Jul 15 23:58:35.182645 kernel: Freeing unused kernel image (initmem) memory: 54424K Jul 15 23:58:35.182663 kernel: Write protecting the kernel read-only data: 24576k Jul 15 23:58:35.182681 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 23:58:35.182703 kernel: Run /init as init process Jul 15 23:58:35.182721 kernel: with arguments: Jul 15 23:58:35.182739 kernel: /init Jul 15 23:58:35.182756 kernel: with environment: Jul 15 23:58:35.182772 kernel: HOME=/ Jul 15 23:58:35.182789 kernel: TERM=linux Jul 15 23:58:35.184358 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 23:58:35.184385 systemd[1]: Successfully made /usr/ read-only. Jul 15 23:58:35.184424 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:58:35.184443 systemd[1]: Detected virtualization google. Jul 15 23:58:35.184460 systemd[1]: Detected architecture x86-64. Jul 15 23:58:35.184477 systemd[1]: Running in initrd. Jul 15 23:58:35.184495 systemd[1]: No hostname configured, using default hostname. Jul 15 23:58:35.184514 systemd[1]: Hostname set to . Jul 15 23:58:35.184531 systemd[1]: Initializing machine ID from random generator. Jul 15 23:58:35.184549 systemd[1]: Queued start job for default target initrd.target. Jul 15 23:58:35.184572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:58:35.184609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:58:35.184635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 23:58:35.184656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:58:35.184675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 23:58:35.184698 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 23:58:35.184720 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 23:58:35.184741 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 23:58:35.184761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:58:35.184783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:58:35.184829 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:58:35.184849 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:58:35.184870 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:58:35.184892 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:58:35.184916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jul 15 23:58:35.184935 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:58:35.184956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 23:58:35.184974 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 23:58:35.184994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:58:35.185015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:58:35.185035 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:58:35.185057 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:58:35.185078 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:58:35.185098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:58:35.185119 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 23:58:35.185139 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:58:35.185158 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:58:35.185178 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:58:35.185198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:58:35.185241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:58:35.185266 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:58:35.185325 systemd-journald[207]: Collecting audit messages is disabled. Jul 15 23:58:35.185374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:58:35.185394 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:58:35.185423 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:58:35.185444 systemd-journald[207]: Journal started Jul 15 23:58:35.185486 systemd-journald[207]: Runtime Journal (/run/log/journal/39ae8be2d5624375af97f9e25ee8e6f6) is 8M, max 148.9M, 140.9M free. Jul 15 23:58:35.187820 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:58:35.194213 systemd-modules-load[208]: Inserted module 'overlay' Jul 15 23:58:35.195352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:58:35.214148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:58:35.217990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:58:35.230108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:35.232411 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:58:35.240787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 23:58:35.253591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:58:35.255049 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 15 23:58:35.258986 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 15 23:58:35.262976 kernel: Bridge firewalling registered Jul 15 23:58:35.261417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:58:35.270222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:58:35.273096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:58:35.294226 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:58:35.296793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:58:35.303265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:58:35.311976 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 23:58:35.345931 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:58:35.367171 systemd-resolved[245]: Positive Trust Anchors: Jul 15 23:58:35.367719 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:58:35.367941 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:58:35.377376 systemd-resolved[245]: Defaulting to hostname 'linux'. Jul 15 23:58:35.381504 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:58:35.387027 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:58:35.467858 kernel: SCSI subsystem initialized Jul 15 23:58:35.479832 kernel: Loading iSCSI transport class v2.0-870. Jul 15 23:58:35.492845 kernel: iscsi: registered transport (tcp) Jul 15 23:58:35.518181 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:58:35.518266 kernel: QLogic iSCSI HBA Driver Jul 15 23:58:35.542073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:58:35.561197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:58:35.565260 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:58:35.627452 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:58:35.630492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 23:58:35.688875 kernel: raid6: avx2x4 gen() 17904 MB/s Jul 15 23:58:35.705839 kernel: raid6: avx2x2 gen() 17986 MB/s Jul 15 23:58:35.723253 kernel: raid6: avx2x1 gen() 14204 MB/s Jul 15 23:58:35.723306 kernel: raid6: using algorithm avx2x2 gen() 17986 MB/s Jul 15 23:58:35.741458 kernel: raid6: .... 
xor() 18547 MB/s, rmw enabled Jul 15 23:58:35.741517 kernel: raid6: using avx2x2 recovery algorithm Jul 15 23:58:35.765846 kernel: xor: automatically using best checksumming function avx Jul 15 23:58:35.951847 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:58:35.960218 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:58:35.963731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:58:35.997305 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jul 15 23:58:36.006203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:58:36.013671 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:58:36.051236 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 15 23:58:36.084047 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:58:36.090944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:58:36.185198 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:58:36.190023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:58:36.284851 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 23:58:36.289576 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 15 23:58:36.317415 kernel: scsi host0: Virtio SCSI HBA Jul 15 23:58:36.357831 kernel: AES CTR mode by8 optimization enabled Jul 15 23:58:36.389872 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 15 23:58:36.462863 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 23:58:36.462946 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 15 23:58:36.463278 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 15 23:58:36.463553 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 15 23:58:36.463785 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 15 23:58:36.464048 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 15 23:58:36.471208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:58:36.471420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:36.477456 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:58:36.490057 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:58:36.490101 kernel: GPT:17805311 != 25165823 Jul 15 23:58:36.490123 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:58:36.490143 kernel: GPT:17805311 != 25165823 Jul 15 23:58:36.490162 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 23:58:36.490317 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:58:36.490342 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 15 23:58:36.495244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:58:36.499472 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:58:36.559617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:36.587199 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 15 23:58:36.613539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Jul 15 23:58:36.619226 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:58:36.635068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 15 23:58:36.635345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 15 23:58:36.656092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 15 23:58:36.668810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:58:36.669046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:58:36.678946 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:58:36.685084 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:58:36.699174 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:58:36.711089 disk-uuid[609]: Primary Header is updated. Jul 15 23:58:36.711089 disk-uuid[609]: Secondary Entries is updated. Jul 15 23:58:36.711089 disk-uuid[609]: Secondary Header is updated. Jul 15 23:58:36.726604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:58:36.729871 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:58:36.760842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:58:37.779856 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:58:37.780504 disk-uuid[610]: The operation has completed successfully. Jul 15 23:58:37.866502 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:58:37.866659 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:58:37.915579 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:58:37.932120 sh[631]: Success Jul 15 23:58:37.956409 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:58:37.957308 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:58:37.957354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:58:37.969826 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 23:58:38.065933 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:58:38.071940 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:58:38.093399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:58:38.116888 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:58:38.116967 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (643) Jul 15 23:58:38.122360 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e Jul 15 23:58:38.122430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:58:38.122456 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:58:38.154438 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:58:38.155321 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:58:38.159492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 15 23:58:38.161742 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:58:38.172355 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:58:38.213843 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (677) Jul 15 23:58:38.217963 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:58:38.218036 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:58:38.218063 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:58:38.230850 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:58:38.231362 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:58:38.235843 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:58:38.329929 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:58:38.338498 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:58:38.433015 systemd-networkd[812]: lo: Link UP Jul 15 23:58:38.433415 systemd-networkd[812]: lo: Gained carrier Jul 15 23:58:38.435928 systemd-networkd[812]: Enumeration completed Jul 15 23:58:38.436078 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:58:38.438324 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:38.438331 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:58:38.440402 systemd-networkd[812]: eth0: Link UP Jul 15 23:58:38.440409 systemd-networkd[812]: eth0: Gained carrier Jul 15 23:58:38.440424 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:38.460287 systemd-networkd[812]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f' Jul 15 23:58:38.460310 systemd-networkd[812]: eth0: DHCPv4 address 10.128.0.76/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:58:38.465235 systemd[1]: Reached target network.target - Network. Jul 15 23:58:38.511161 ignition[736]: Ignition 2.21.0 Jul 15 23:58:38.511175 ignition[736]: Stage: fetch-offline Jul 15 23:58:38.514405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:58:38.511218 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:38.519307 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 23:58:38.511229 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:38.511353 ignition[736]: parsed url from cmdline: "" Jul 15 23:58:38.511358 ignition[736]: no config URL provided Jul 15 23:58:38.511364 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:58:38.511373 ignition[736]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:58:38.511381 ignition[736]: failed to fetch config: resource requires networking Jul 15 23:58:38.511853 ignition[736]: Ignition finished successfully Jul 15 23:58:38.551929 ignition[821]: Ignition 2.21.0 Jul 15 23:58:38.551947 ignition[821]: Stage: fetch Jul 15 23:58:38.552174 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:38.552190 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:38.552321 ignition[821]: parsed url from cmdline: "" Jul 15 23:58:38.552328 ignition[821]: no config URL provided Jul 15 23:58:38.552338 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:58:38.552352 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:58:38.567658 unknown[821]: fetched base config from "system" Jul 15 23:58:38.552417 ignition[821]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 15 23:58:38.567669 unknown[821]: fetched base config from "system" Jul 15 23:58:38.558254 ignition[821]: GET result: OK Jul 15 23:58:38.567680 unknown[821]: fetched user config from "gcp" Jul 15 23:58:38.558372 ignition[821]: parsing config with SHA512: 4b5d03f25b5f698ed654d0df218c16019bf3d717923767b550508206171c37f7ab9ec47fe29fbb73f1cf50a9860b516963560c7323ed61777b283544ffe4e308 Jul 15 23:58:38.571369 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:58:38.568210 ignition[821]: fetch: fetch complete Jul 15 23:58:38.578675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 23:58:38.568216 ignition[821]: fetch: fetch passed Jul 15 23:58:38.568272 ignition[821]: Ignition finished successfully Jul 15 23:58:38.621209 ignition[828]: Ignition 2.21.0 Jul 15 23:58:38.621226 ignition[828]: Stage: kargs Jul 15 23:58:38.621449 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:38.625719 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:58:38.621468 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:38.627693 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:58:38.623492 ignition[828]: kargs: kargs passed Jul 15 23:58:38.623573 ignition[828]: Ignition finished successfully Jul 15 23:58:38.665889 ignition[835]: Ignition 2.21.0 Jul 15 23:58:38.666232 ignition[835]: Stage: disks Jul 15 23:58:38.666475 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:38.666487 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:38.672210 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:58:38.669854 ignition[835]: disks: disks passed Jul 15 23:58:38.674439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:58:38.670019 ignition[835]: Ignition finished successfully Jul 15 23:58:38.681960 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:58:38.686923 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:58:38.690954 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 15 23:58:38.695942 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:58:38.701445 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:58:38.753450 systemd-fsck[844]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 15 23:58:38.763422 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:58:38.771794 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:58:38.949816 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none. Jul 15 23:58:38.950893 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:58:38.954641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:58:38.960351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:58:38.976659 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:58:38.980483 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:58:38.980570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:58:38.998311 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (852) Jul 15 23:58:38.998358 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:58:38.980614 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:58:39.003975 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:58:39.004055 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:58:39.000293 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:58:39.005671 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:58:39.015451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:58:39.125265 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:58:39.133718 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:58:39.140658 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:58:39.147067 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:58:39.387910 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:58:39.394569 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:58:39.398291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:58:39.424029 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:58:39.426947 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:58:39.461965 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 15 23:58:39.468999 ignition[964]: INFO : Ignition 2.21.0 Jul 15 23:58:39.468999 ignition[964]: INFO : Stage: mount Jul 15 23:58:39.468999 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:39.468999 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:39.468999 ignition[964]: INFO : mount: mount passed Jul 15 23:58:39.468999 ignition[964]: INFO : Ignition finished successfully Jul 15 23:58:39.469038 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:58:39.470714 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:58:39.502107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:58:39.533838 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (977) Jul 15 23:58:39.536518 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:58:39.536573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:58:39.536599 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:58:39.545825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:58:39.578346 ignition[994]: INFO : Ignition 2.21.0 Jul 15 23:58:39.578346 ignition[994]: INFO : Stage: files Jul 15 23:58:39.582968 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:39.582968 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:39.588937 ignition[994]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:58:39.588937 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:58:39.588937 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:58:39.598949 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:58:39.598949 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:58:39.598949 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:58:39.598949 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 23:58:39.598949 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 23:58:39.594370 unknown[994]: wrote ssh authorized keys file for user: core Jul 15 23:58:39.718260 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:58:40.188104 systemd-networkd[812]: eth0: Gained IPv6LL Jul 15 23:58:40.196241 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 23:58:40.201029 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:58:40.201029 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 23:58:40.394063 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 23:58:40.543029 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:58:40.543029 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:58:40.552205 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 23:58:40.853938 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 23:58:41.249970 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:58:41.249970 ignition[994]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:58:41.257970 ignition[994]: INFO : files: createResultFile: createFiles: 
op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:58:41.257970 ignition[994]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:58:41.257970 ignition[994]: INFO : files: files passed Jul 15 23:58:41.257970 ignition[994]: INFO : Ignition finished successfully Jul 15 23:58:41.260354 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:58:41.263900 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:58:41.276048 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:58:41.304259 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:58:41.304421 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:58:41.316012 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:58:41.316012 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:58:41.320138 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:58:41.319457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:58:41.326673 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:58:41.332225 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:58:41.398369 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:58:41.398528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:58:41.403596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:58:41.407183 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:58:41.412289 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:58:41.413873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:58:41.446940 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:58:41.449872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:58:41.480256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:58:41.480699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:58:41.485375 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:58:41.490380 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:58:41.490617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:58:41.504036 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:58:41.507131 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:58:41.513084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:58:41.519109 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:58:41.526130 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:58:41.532282 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jul 15 23:58:41.535295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:58:41.540305 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:58:41.544345 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:58:41.549331 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:58:41.553278 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:58:41.557237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:58:41.557466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:58:41.567998 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:58:41.571238 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:58:41.574187 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:58:41.574469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:58:41.579259 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:58:41.579474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:58:41.595000 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:58:41.595538 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:58:41.599378 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:58:41.599563 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:58:41.605650 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:58:41.616983 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:58:41.617506 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:58:41.620564 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:58:41.629974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:58:41.631156 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:58:41.637484 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:58:41.638954 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:58:41.654824 ignition[1047]: INFO : Ignition 2.21.0 Jul 15 23:58:41.654824 ignition[1047]: INFO : Stage: umount Jul 15 23:58:41.654824 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:58:41.654824 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:58:41.668921 ignition[1047]: INFO : umount: umount passed Jul 15 23:58:41.668921 ignition[1047]: INFO : Ignition finished successfully Jul 15 23:58:41.657643 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:58:41.659795 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:58:41.668060 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:58:41.669040 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:58:41.669187 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:58:41.676374 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:58:41.676519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:58:41.686291 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jul 15 23:58:41.686404 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:58:41.694199 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:58:41.694288 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:58:41.697201 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 23:58:41.697382 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 23:58:41.701230 systemd[1]: Stopped target network.target - Network. Jul 15 23:58:41.706159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:58:41.706347 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:58:41.710291 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:58:41.714161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:58:41.718169 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:58:41.721088 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:58:41.725138 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:58:41.729169 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:58:41.729226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:58:41.733153 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:58:41.733341 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:58:41.737209 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:58:41.737417 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:58:41.741168 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:58:41.741331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:58:41.746169 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:58:41.746416 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:58:41.751666 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:58:41.759907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:58:41.763411 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:58:41.763667 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:58:41.770724 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:58:41.771023 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:58:41.771171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:58:41.776849 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:58:41.778146 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:58:41.783109 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:58:41.783166 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:58:41.788419 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:58:41.792904 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:58:41.792988 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:58:41.796145 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 15 23:58:41.796202 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:58:41.802289 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:58:41.802345 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:58:41.810006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:58:41.810104 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:58:41.815384 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:58:41.824537 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:58:41.824660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:58:41.830132 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:58:41.830644 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:58:41.842663 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:58:41.842787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:58:41.851993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:58:41.852073 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:58:41.857960 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:58:41.858070 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:58:41.865936 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:58:41.866029 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:58:41.872928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:58:41.873050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:58:41.882190 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:58:41.892920 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:58:41.893039 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:58:41.897270 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:58:41.897356 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:58:41.910515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:58:41.910581 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:41.916624 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 23:58:41.916700 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:58:41.916757 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:58:41.917351 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:58:41.917482 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:58:42.004965 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 15 23:58:41.922312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:58:41.922464 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 15 23:58:41.928125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:58:41.931848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:58:41.958848 systemd[1]: Switching root. Jul 15 23:58:42.018888 systemd-journald[207]: Journal stopped Jul 15 23:58:44.168038 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:58:44.168095 kernel: SELinux: policy capability open_perms=1 Jul 15 23:58:44.168117 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:58:44.168147 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:58:44.168166 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:58:44.168186 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:58:44.168212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:58:44.168230 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:58:44.168249 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:58:44.168268 kernel: audit: type=1403 audit(1752623922.687:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:58:44.168290 systemd[1]: Successfully loaded SELinux policy in 52.131ms. Jul 15 23:58:44.168336 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.422ms. Jul 15 23:58:44.168361 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:58:44.168389 systemd[1]: Detected virtualization google. Jul 15 23:58:44.168412 systemd[1]: Detected architecture x86-64. Jul 15 23:58:44.168435 systemd[1]: Detected first boot. Jul 15 23:58:44.168459 systemd[1]: Initializing machine ID from random generator. Jul 15 23:58:44.168482 zram_generator::config[1090]: No configuration found. Jul 15 23:58:44.168511 kernel: Guest personality initialized and is inactive Jul 15 23:58:44.168532 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 23:58:44.168552 kernel: Initialized host personality Jul 15 23:58:44.168574 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:58:44.168596 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:58:44.168620 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:58:44.168643 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:58:44.168670 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:58:44.168693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:58:44.168716 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 23:58:44.168739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:58:44.168765 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:58:44.168788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:58:44.168912 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:58:44.168944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:58:44.168969 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jul 15 23:58:44.168991 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:58:44.169015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:58:44.169038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:58:44.169061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:58:44.169084 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:58:44.169109 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:58:44.169139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:58:44.169168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 23:58:44.169192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:58:44.169216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:58:44.169239 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:58:44.169263 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:58:44.169288 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:58:44.169312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:58:44.169341 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:58:44.169365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:58:44.169388 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:58:44.169413 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:58:44.169437 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:58:44.169460 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:58:44.169484 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:58:44.169514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:58:44.169538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:58:44.169562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:58:44.169587 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:58:44.169611 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:58:44.169635 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:58:44.169663 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 23:58:44.169688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:44.169712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:58:44.169736 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:58:44.169762 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:58:44.169787 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:58:44.171655 systemd[1]: Reached target machines.target - Containers. 
Jul 15 23:58:44.171685 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:58:44.171718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:58:44.171743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:58:44.171768 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:58:44.174344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:58:44.174394 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:58:44.174421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:58:44.174448 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:58:44.174473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:58:44.174499 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:58:44.174532 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:58:44.174557 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:58:44.174582 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:58:44.174606 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:58:44.174660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:58:44.174684 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:58:44.174709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:58:44.174734 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:58:44.174764 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:58:44.174789 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:58:44.174843 kernel: fuse: init (API version 7.41) Jul 15 23:58:44.174868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:58:44.174892 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:58:44.174917 systemd[1]: Stopped verity-setup.service. Jul 15 23:58:44.174942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:44.174967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:58:44.174996 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:58:44.175019 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:58:44.175045 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:58:44.175070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:58:44.175096 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:58:44.175121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:58:44.175146 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 15 23:58:44.175171 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:58:44.175195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:58:44.175224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:58:44.175249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:58:44.175274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:58:44.175298 kernel: loop: module loaded Jul 15 23:58:44.175322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:58:44.175347 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:58:44.175371 kernel: ACPI: bus type drm_connector registered Jul 15 23:58:44.175394 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:58:44.175423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:58:44.175447 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:58:44.175471 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:58:44.175496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:58:44.175520 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:58:44.175545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:58:44.175570 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:58:44.175655 systemd-journald[1161]: Collecting audit messages is disabled. Jul 15 23:58:44.175714 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:58:44.175741 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:58:44.175767 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:58:44.176931 systemd-journald[1161]: Journal started Jul 15 23:58:44.176991 systemd-journald[1161]: Runtime Journal (/run/log/journal/82a0c9a532bf45d09f18897848549d7f) is 8M, max 148.9M, 140.9M free. Jul 15 23:58:43.579926 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:58:43.603747 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 23:58:43.604371 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:58:44.185849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 23:58:44.193068 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:58:44.193141 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:58:44.200652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:58:44.215837 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:58:44.221866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:58:44.229410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:58:44.234293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 15 23:58:44.239892 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:58:44.244198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:58:44.250841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:58:44.270112 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:58:44.281830 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:58:44.289828 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:58:44.298268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:58:44.302147 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:58:44.307491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:58:44.369566 kernel: loop0: detected capacity change from 0 to 221472 Jul 15 23:58:44.374613 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:58:44.389706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:58:44.398139 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:58:44.406266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:58:44.444480 systemd-journald[1161]: Time spent on flushing to /var/log/journal/82a0c9a532bf45d09f18897848549d7f is 90.587ms for 962 entries. Jul 15 23:58:44.444480 systemd-journald[1161]: System Journal (/var/log/journal/82a0c9a532bf45d09f18897848549d7f) is 8M, max 584.8M, 576.8M free. Jul 15 23:58:44.576734 systemd-journald[1161]: Received client request to flush runtime journal. Jul 15 23:58:44.577518 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:58:44.577580 kernel: loop1: detected capacity change from 0 to 52072 Jul 15 23:58:44.457059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:58:44.472194 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:58:44.481108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:58:44.487885 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:58:44.581759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:58:44.582552 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Jul 15 23:58:44.582580 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Jul 15 23:58:44.598845 kernel: loop2: detected capacity change from 0 to 146240 Jul 15 23:58:44.602549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:58:44.610408 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:58:44.700967 kernel: loop3: detected capacity change from 0 to 113872 Jul 15 23:58:44.779860 kernel: loop4: detected capacity change from 0 to 221472 Jul 15 23:58:44.832082 kernel: loop5: detected capacity change from 0 to 52072 Jul 15 23:58:44.864886 kernel: loop6: detected capacity change from 0 to 146240 Jul 15 23:58:44.915837 kernel: loop7: detected capacity change from 0 to 113872 Jul 15 23:58:44.967168 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. 
Jul 15 23:58:44.969434 (sd-merge)[1236]: Merged extensions into '/usr'. Jul 15 23:58:44.988504 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:58:44.988683 systemd[1]: Reloading... Jul 15 23:58:45.141846 zram_generator::config[1258]: No configuration found. Jul 15 23:58:45.381094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:58:45.457834 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 23:58:45.600374 systemd[1]: Reloading finished in 610 ms. Jul 15 23:58:45.616405 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:58:45.620432 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:58:45.640008 systemd[1]: Starting ensure-sysext.service... Jul 15 23:58:45.642758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:58:45.683964 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:58:45.683987 systemd[1]: Reloading... Jul 15 23:58:45.709967 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:58:45.711451 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:58:45.712573 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:58:45.715587 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:58:45.719753 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:58:45.722972 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jul 15 23:58:45.723088 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Jul 15 23:58:45.735408 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:58:45.735559 systemd-tmpfiles[1303]: Skipping /boot Jul 15 23:58:45.780678 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:58:45.782855 systemd-tmpfiles[1303]: Skipping /boot Jul 15 23:58:45.847835 zram_generator::config[1330]: No configuration found. Jul 15 23:58:45.982590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:58:46.098632 systemd[1]: Reloading finished in 413 ms. Jul 15 23:58:46.121302 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:58:46.141069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:58:46.157952 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:58:46.177542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:58:46.195721 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:58:46.212009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 15 23:58:46.225386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:58:46.236578 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:58:46.255126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:46.256672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:58:46.262128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:58:46.275985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:58:46.278830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:58:46.297223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:58:46.297483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:58:46.302416 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:58:46.306033 augenrules[1400]: No rules Jul 15 23:58:46.310919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:46.316634 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:58:46.321235 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:58:46.331580 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:58:46.343303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:58:46.343886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:58:46.355642 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Jul 15 23:58:46.355730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:58:46.356051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:58:46.366674 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:58:46.368854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:58:46.381574 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:58:46.411382 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:58:46.424077 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:58:46.433663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:58:46.459033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:46.464248 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:58:46.472241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:58:46.475930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:58:46.489359 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 15 23:58:46.505941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:58:46.517865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:58:46.531943 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 23:58:46.539072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:58:46.539311 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:58:46.546267 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:58:46.555870 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:58:46.572986 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:58:46.581972 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:58:46.582196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:58:46.587532 systemd-resolved[1385]: Positive Trust Anchors: Jul 15 23:58:46.587550 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:58:46.587626 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:58:46.588416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:58:46.589104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:58:46.597780 systemd-resolved[1385]: Defaulting to hostname 'linux'. Jul 15 23:58:46.599793 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:58:46.601195 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:58:46.610437 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:58:46.614023 augenrules[1425]: /sbin/augenrules: No change Jul 15 23:58:46.621730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:58:46.622327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:58:46.633694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:58:46.634013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:58:46.646340 augenrules[1464]: No rules Jul 15 23:58:46.653081 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:58:46.653455 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:58:46.662737 systemd[1]: Finished ensure-sysext.service. Jul 15 23:58:46.699724 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jul 15 23:58:46.709997 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:58:46.743845 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:58:46.757782 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jul 15 23:58:46.766966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:58:46.767044 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:58:46.776337 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:58:46.788208 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:58:46.797972 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 23:58:46.808200 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:58:46.817194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:58:46.827984 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:58:46.839112 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:58:46.839177 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:58:46.846961 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:58:46.858246 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:58:46.871879 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:58:46.888919 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:58:46.899209 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:58:46.909976 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:58:46.920450 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:58:46.932262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:58:46.939899 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jul 15 23:58:46.950202 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:58:46.967829 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jul 15 23:58:46.973835 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 23:58:46.976983 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 15 23:58:46.988829 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:58:47.009062 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:58:47.018027 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:58:47.026982 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:58:47.035093 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:58:47.035365 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:58:47.041123 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 15 23:58:47.045272 systemd-networkd[1448]: lo: Link UP Jul 15 23:58:47.045286 systemd-networkd[1448]: lo: Gained carrier Jul 15 23:58:47.056144 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:58:47.058151 systemd-networkd[1448]: Enumeration completed Jul 15 23:58:47.060362 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:47.060376 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:58:47.063237 systemd-networkd[1448]: eth0: Link UP Jul 15 23:58:47.066135 systemd-networkd[1448]: eth0: Gained carrier Jul 15 23:58:47.066305 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:58:47.068225 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:58:47.082042 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:58:47.084472 systemd-networkd[1448]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f' Jul 15 23:58:47.084494 systemd-networkd[1448]: eth0: DHCPv4 address 10.128.0.76/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:58:47.086765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:58:47.108312 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:58:47.124142 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 23:58:47.137308 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:58:47.151214 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 23:58:47.158979 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache Jul 15 23:58:47.154712 oslogin_cache_refresh[1514]: Refreshing passwd entry cache Jul 15 23:58:47.166861 jq[1511]: false Jul 15 23:58:47.166145 oslogin_cache_refresh[1514]: Failure getting users, quitting Jul 15 23:58:47.164089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:58:47.167447 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting Jul 15 23:58:47.167447 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:58:47.167447 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache Jul 15 23:58:47.166170 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:58:47.166250 oslogin_cache_refresh[1514]: Refreshing group entry cache Jul 15 23:58:47.176360 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting Jul 15 23:58:47.176989 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:58:47.178781 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 15 23:58:47.177086 oslogin_cache_refresh[1514]: Failure getting groups, quitting Jul 15 23:58:47.177112 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:58:47.192097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:58:47.209688 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 15 23:58:47.228407 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:58:47.241024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 15 23:58:47.244539 extend-filesystems[1512]: Found /dev/sda6 Jul 15 23:58:47.242210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:58:47.244301 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:58:47.256399 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:58:47.272844 extend-filesystems[1512]: Found /dev/sda9 Jul 15 23:58:47.336872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 15 23:58:47.275551 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:58:47.337072 extend-filesystems[1512]: Checking size of /dev/sda9 Jul 15 23:58:47.385373 kernel: ACPI: button: Power Button [PWRF] Jul 15 23:58:47.286076 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:58:47.297392 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:58:47.297741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:58:47.298268 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 23:58:47.298636 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 23:58:47.308513 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:58:47.308884 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:58:47.364567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:58:47.366904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:58:47.404826 extend-filesystems[1512]: Resized partition /dev/sda9 Jul 15 23:58:47.426578 extend-filesystems[1555]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:58:47.438007 update_engine[1534]: I20250715 23:58:47.417709 1534 main.cc:92] Flatcar Update Engine starting Jul 15 23:58:47.446534 systemd[1]: Reached target network.target - Network. Jul 15 23:58:47.453241 coreos-metadata[1508]: Jul 15 23:58:47.451 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 15 23:58:47.458678 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 15 23:58:47.476220 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 15 23:58:47.476331 coreos-metadata[1508]: Jul 15 23:58:47.473 INFO Fetch successful Jul 15 23:58:47.476331 coreos-metadata[1508]: Jul 15 23:58:47.473 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 15 23:58:47.476331 coreos-metadata[1508]: Jul 15 23:58:47.475 INFO Fetch successful Jul 15 23:58:47.476331 coreos-metadata[1508]: Jul 15 23:58:47.475 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 15 23:58:47.477765 coreos-metadata[1508]: Jul 15 23:58:47.477 INFO Fetch successful Jul 15 23:58:47.477765 coreos-metadata[1508]: Jul 15 23:58:47.477 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 15 23:58:47.485567 jq[1535]: true Jul 15 23:58:47.485957 coreos-metadata[1508]: Jul 15 23:58:47.483 INFO Fetch successful Jul 15 23:58:47.498708 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:58:47.518841 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: ---------------------------------------------------- Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: corporation. Support and training for ntp-4 are Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: available at https://www.nwtime.org/support Jul 15 23:58:47.522168 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: ---------------------------------------------------- Jul 15 23:58:47.520489 ntpd[1518]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting Jul 15 23:58:47.526155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:58:47.520520 ntpd[1518]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:58:47.531028 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: proto: precision = 0.091 usec (-23) Jul 15 23:58:47.520542 ntpd[1518]: ---------------------------------------------------- Jul 15 23:58:47.520555 ntpd[1518]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:58:47.520569 ntpd[1518]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:58:47.520582 ntpd[1518]: corporation. 
Support and training for ntp-4 are Jul 15 23:58:47.520594 ntpd[1518]: available at https://www.nwtime.org/support Jul 15 23:58:47.520607 ntpd[1518]: ---------------------------------------------------- Jul 15 23:58:47.530277 ntpd[1518]: proto: precision = 0.091 usec (-23) Jul 15 23:58:47.542876 kernel: ACPI: button: Sleep Button [SLPF] Jul 15 23:58:47.542969 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: basedate set to 2025-07-03 Jul 15 23:58:47.542969 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: gps base set to 2025-07-06 (week 2374) Jul 15 23:58:47.539495 ntpd[1518]: basedate set to 2025-07-03 Jul 15 23:58:47.539523 ntpd[1518]: gps base set to 2025-07-06 (week 2374) Jul 15 23:58:47.552461 kernel: EDAC MC: Ver: 3.0.0 Jul 15 23:58:47.586051 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listen normally on 3 eth0 10.128.0.76:123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listen normally on 4 lo [::1]:123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:4c%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4c%2#123 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:4c%2 Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: Listening on routing socket on fd #21 for interface updates Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:58:47.654984 ntpd[1518]: 15 Jul 23:58:47 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:58:47.580487 ntpd[1518]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:58:47.602738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:58:47.580552 ntpd[1518]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:58:47.630329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jul 15 23:58:47.580768 ntpd[1518]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:58:47.593429 ntpd[1518]: Listen normally on 3 eth0 10.128.0.76:123 Jul 15 23:58:47.593561 ntpd[1518]: Listen normally on 4 lo [::1]:123 Jul 15 23:58:47.593652 ntpd[1518]: bind(21) AF_INET6 fe80::4001:aff:fe80:4c%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:58:47.593693 ntpd[1518]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4c%2#123 Jul 15 23:58:47.593718 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:4c%2 Jul 15 23:58:47.593781 ntpd[1518]: Listening on routing socket on fd #21 for interface updates Jul 15 23:58:47.615144 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:58:47.615188 ntpd[1518]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:58:47.663111 jq[1570]: true Jul 15 23:58:47.663415 extend-filesystems[1555]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 15 23:58:47.663415 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 15 23:58:47.663415 extend-filesystems[1555]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 15 23:58:47.666133 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:58:47.724194 extend-filesystems[1512]: Resized filesystem in /dev/sda9 Jul 15 23:58:47.693928 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:58:47.694328 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:58:47.727621 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 23:58:47.739981 tar[1539]: linux-amd64/helm Jul 15 23:58:47.741717 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:58:47.769843 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:58:47.846478 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:58:47.924488 dbus-daemon[1509]: [system] SELinux support is enabled Jul 15 23:58:47.924916 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:58:47.932880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:58:47.932935 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:58:47.933083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:58:47.933119 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:58:47.959029 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:58:47.969710 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1448 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 23:58:47.980253 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 15 23:58:47.986522 update_engine[1534]: I20250715 23:58:47.986452 1534 update_check_scheduler.cc:74] Next update check in 2m22s Jul 15 23:58:47.991967 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:58:48.039463 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:58:48.047457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:58:48.085483 systemd[1]: Starting sshkeys.service... Jul 15 23:58:48.096093 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:58:48.189580 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 15 23:58:48.192906 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 23:58:48.342605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:58:48.437115 coreos-metadata[1617]: Jul 15 23:58:48.437 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 15 23:58:48.441014 coreos-metadata[1617]: Jul 15 23:58:48.440 INFO Fetch failed with 404: resource not found Jul 15 23:58:48.441141 coreos-metadata[1617]: Jul 15 23:58:48.441 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 15 23:58:48.442028 coreos-metadata[1617]: Jul 15 23:58:48.442 INFO Fetch successful Jul 15 23:58:48.442151 coreos-metadata[1617]: Jul 15 23:58:48.442 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 15 23:58:48.445144 coreos-metadata[1617]: Jul 15 23:58:48.445 INFO Fetch failed with 404: resource not found Jul 15 23:58:48.445237 coreos-metadata[1617]: Jul 15 23:58:48.445 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 15 23:58:48.447084 coreos-metadata[1617]: Jul 15 23:58:48.447 INFO Fetch failed with 404: resource not found Jul 15 23:58:48.447247 coreos-metadata[1617]: Jul 15 23:58:48.447 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 15 23:58:48.449483 coreos-metadata[1617]: Jul 15 23:58:48.449 INFO Fetch successful Jul 15 23:58:48.455878 unknown[1617]: wrote ssh authorized keys file for user: core Jul 15 23:58:48.522271 ntpd[1518]: bind(24) AF_INET6 fe80::4001:aff:fe80:4c%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:58:48.544943 ntpd[1518]: 15 Jul 23:58:48 ntpd[1518]: bind(24) AF_INET6 fe80::4001:aff:fe80:4c%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:58:48.544943 ntpd[1518]: 15 Jul 23:58:48 ntpd[1518]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4c%2#123 Jul 15 23:58:48.544943 ntpd[1518]: 15 Jul 23:58:48 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:4c%2 Jul 15 23:58:48.526344 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:58:48.522326 ntpd[1518]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4c%2#123 Jul 15 23:58:48.522356 ntpd[1518]: failed to init interface for address fe80::4001:aff:fe80:4c%2 Jul 15 23:58:48.552315 update-ssh-keys[1630]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:58:48.550962 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 15 23:58:48.568018 systemd[1]: Finished sshkeys.service. 
Jul 15 23:58:48.713302 systemd-logind[1527]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 23:58:48.713340 systemd-logind[1527]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 15 23:58:48.713382 systemd-logind[1527]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 23:58:48.713719 systemd-logind[1527]: New seat seat0. Jul 15 23:58:48.714985 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:58:48.866315 containerd[1585]: time="2025-07-15T23:58:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:58:48.868908 containerd[1585]: time="2025-07-15T23:58:48.867350855Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:58:48.929956 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 23:58:48.930653 containerd[1585]: time="2025-07-15T23:58:48.930558273Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.19µs" Jul 15 23:58:48.930653 containerd[1585]: time="2025-07-15T23:58:48.930602896Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:58:48.930653 containerd[1585]: time="2025-07-15T23:58:48.930632396Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:58:48.931502 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 23:58:48.932679 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1610 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 23:58:48.934490 containerd[1585]: time="2025-07-15T23:58:48.934453790Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:58:48.934571 containerd[1585]: time="2025-07-15T23:58:48.934499630Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:58:48.934571 containerd[1585]: time="2025-07-15T23:58:48.934542781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:58:48.934656 containerd[1585]: time="2025-07-15T23:58:48.934636014Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:58:48.934703 containerd[1585]: time="2025-07-15T23:58:48.934654388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935057 containerd[1585]: time="2025-07-15T23:58:48.935021552Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935149 containerd[1585]: time="2025-07-15T23:58:48.935055716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935149 containerd[1585]: time="2025-07-15T23:58:48.935075503Z" level=info msg="skip loading 
plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935149 containerd[1585]: time="2025-07-15T23:58:48.935089490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935287 containerd[1585]: time="2025-07-15T23:58:48.935214088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935548 containerd[1585]: time="2025-07-15T23:58:48.935515632Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935612 containerd[1585]: time="2025-07-15T23:58:48.935578134Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:58:48.935612 containerd[1585]: time="2025-07-15T23:58:48.935597404Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:58:48.935856 containerd[1585]: time="2025-07-15T23:58:48.935666660Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:58:48.936244 containerd[1585]: time="2025-07-15T23:58:48.936217549Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:58:48.936580 containerd[1585]: time="2025-07-15T23:58:48.936323537Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:58:48.949302 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 23:58:48.955122 containerd[1585]: time="2025-07-15T23:58:48.955077190Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955154827Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955180704Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955199653Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955220637Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955237654Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:58:48.955286 containerd[1585]: time="2025-07-15T23:58:48.955270644Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955298217Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955328166Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955470294Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955490718Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955518845Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955677562Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955708680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955746455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955776611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955819757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955838712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955856147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955871104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955888159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:58:48.956013 containerd[1585]: time="2025-07-15T23:58:48.955904096Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:58:48.956776 containerd[1585]: time="2025-07-15T23:58:48.955933225Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:58:48.956776 containerd[1585]: time="2025-07-15T23:58:48.956043821Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:58:48.956776 containerd[1585]: time="2025-07-15T23:58:48.956066355Z" level=info msg="Start snapshots syncer" Jul 15 23:58:48.956776 containerd[1585]: time="2025-07-15T23:58:48.956107129Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:58:48.959658 containerd[1585]: time="2025-07-15T23:58:48.956488254Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:58:48.959658 containerd[1585]: time="2025-07-15T23:58:48.956573970Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:58:48.959658 containerd[1585]: time="2025-07-15T23:58:48.956694786Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:58:48.959658 containerd[1585]: time="2025-07-15T23:58:48.959631581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:58:48.960916 containerd[1585]: time="2025-07-15T23:58:48.960869876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:58:48.960992 containerd[1585]: time="2025-07-15T23:58:48.960928577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:58:48.960992 containerd[1585]: time="2025-07-15T23:58:48.960953067Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:58:48.961093 containerd[1585]: time="2025-07-15T23:58:48.960974301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:58:48.961093 containerd[1585]: time="2025-07-15T23:58:48.961013308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:58:48.961093 containerd[1585]: time="2025-07-15T23:58:48.961035050Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:58:48.961766 containerd[1585]: time="2025-07-15T23:58:48.961095008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:58:48.961766 containerd[1585]: 
time="2025-07-15T23:58:48.961115816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:58:48.961766 containerd[1585]: time="2025-07-15T23:58:48.961135310Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:58:48.961766 containerd[1585]: time="2025-07-15T23:58:48.961255608Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:58:48.961766 containerd[1585]: time="2025-07-15T23:58:48.961287974Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:58:48.961766 containerd[1585]: time="2025-07-15T23:58:48.961302630Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:58:48.962878 containerd[1585]: time="2025-07-15T23:58:48.962843633Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:58:48.962966 containerd[1585]: time="2025-07-15T23:58:48.962878736Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:58:48.962966 containerd[1585]: time="2025-07-15T23:58:48.962922637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:58:48.962966 containerd[1585]: time="2025-07-15T23:58:48.962943688Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:58:48.963100 containerd[1585]: time="2025-07-15T23:58:48.962970658Z" level=info msg="runtime interface created" Jul 15 23:58:48.963100 containerd[1585]: time="2025-07-15T23:58:48.963000760Z" level=info msg="created NRI interface" Jul 15 23:58:48.963100 containerd[1585]: time="2025-07-15T23:58:48.963016185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:58:48.963100 containerd[1585]: time="2025-07-15T23:58:48.963037621Z" level=info msg="Connect containerd service" Jul 15 23:58:48.963561 containerd[1585]: time="2025-07-15T23:58:48.963106182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:58:48.970875 containerd[1585]: time="2025-07-15T23:58:48.969100657Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:58:49.048733 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:58:49.085900 systemd-networkd[1448]: eth0: Gained IPv6LL Jul 15 23:58:49.093789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:58:49.104990 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:58:49.115481 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:58:49.128296 tar[1539]: linux-amd64/LICENSE Jul 15 23:58:49.129204 tar[1539]: linux-amd64/README.md Jul 15 23:58:49.152455 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:58:49.162349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:58:49.176378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:58:49.189468 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jul 15 23:58:49.225767 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:58:49.226124 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:58:49.239071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:58:49.251730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:58:49.250182 polkitd[1638]: Started polkitd version 126 Jul 15 23:58:49.268131 init.sh[1659]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 15 23:58:49.268131 init.sh[1659]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 15 23:58:49.274508 init.sh[1659]: + /usr/bin/google_instance_setup Jul 15 23:58:49.274669 polkitd[1638]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 23:58:49.275387 polkitd[1638]: Loading rules from directory /run/polkit-1/rules.d Jul 15 23:58:49.275465 polkitd[1638]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:58:49.277472 polkitd[1638]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 23:58:49.277529 polkitd[1638]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:58:49.277588 polkitd[1638]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 23:58:49.280672 polkitd[1638]: Finished loading, compiling and executing 2 rules Jul 15 23:58:49.281060 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 23:58:49.284116 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 23:58:49.288119 polkitd[1638]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 23:58:49.331482 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:58:49.341662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:58:49.359338 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:58:49.372665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:58:49.382278 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:58:49.405054 systemd-hostnamed[1610]: Hostname set to (transient) Jul 15 23:58:49.406741 systemd-resolved[1385]: System hostname changed to 'ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f'. Jul 15 23:58:49.407834 containerd[1585]: time="2025-07-15T23:58:49.407569748Z" level=info msg="Start subscribing containerd event" Jul 15 23:58:49.407834 containerd[1585]: time="2025-07-15T23:58:49.407640784Z" level=info msg="Start recovering state" Jul 15 23:58:49.407834 containerd[1585]: time="2025-07-15T23:58:49.407786812Z" level=info msg="Start event monitor" Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408062828Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408095053Z" level=info msg="Start streaming server" Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408118257Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408130080Z" level=info msg="runtime interface starting up..." 
Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408139845Z" level=info msg="starting plugins..." Jul 15 23:58:49.408409 containerd[1585]: time="2025-07-15T23:58:49.408165499Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:58:49.409786 containerd[1585]: time="2025-07-15T23:58:49.409599692Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:58:49.410159 containerd[1585]: time="2025-07-15T23:58:49.410097623Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:58:49.410658 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:58:49.411065 containerd[1585]: time="2025-07-15T23:58:49.410941987Z" level=info msg="containerd successfully booted in 0.549615s" Jul 15 23:58:49.598462 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:58:49.611193 systemd[1]: Started sshd@0-10.128.0.76:22-139.178.89.65:48812.service - OpenSSH per-connection server daemon (139.178.89.65:48812). Jul 15 23:58:49.912086 instance-setup[1675]: INFO Running google_set_multiqueue. Jul 15 23:58:49.939094 instance-setup[1675]: INFO Set channels for eth0 to 2. Jul 15 23:58:49.945718 instance-setup[1675]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jul 15 23:58:49.947603 instance-setup[1675]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jul 15 23:58:49.948270 instance-setup[1675]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jul 15 23:58:49.950748 instance-setup[1675]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jul 15 23:58:49.951031 instance-setup[1675]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jul 15 23:58:49.952419 instance-setup[1675]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jul 15 23:58:49.953881 instance-setup[1675]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jul 15 23:58:49.954597 instance-setup[1675]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jul 15 23:58:49.964970 instance-setup[1675]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 15 23:58:49.969489 instance-setup[1675]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 15 23:58:49.971627 instance-setup[1675]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 15 23:58:49.971685 instance-setup[1675]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 15 23:58:49.993915 init.sh[1659]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 15 23:58:50.006426 sshd[1693]: Accepted publickey for core from 139.178.89.65 port 48812 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:50.015984 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:50.032599 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:58:50.045672 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:58:50.084787 systemd-logind[1527]: New session 1 of user core. Jul 15 23:58:50.102624 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:58:50.122634 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 15 23:58:50.164422 (systemd)[1728]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:58:50.172104 systemd-logind[1527]: New session c1 of user core. Jul 15 23:58:50.241283 startup-script[1725]: INFO Starting startup scripts. Jul 15 23:58:50.248922 startup-script[1725]: INFO No startup scripts found in metadata. Jul 15 23:58:50.249142 startup-script[1725]: INFO Finished running startup scripts. Jul 15 23:58:50.299599 init.sh[1659]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 15 23:58:50.299599 init.sh[1659]: + daemon_pids=() Jul 15 23:58:50.299599 init.sh[1659]: + for d in accounts clock_skew network Jul 15 23:58:50.299599 init.sh[1659]: + daemon_pids+=($!) Jul 15 23:58:50.299599 init.sh[1659]: + for d in accounts clock_skew network Jul 15 23:58:50.299599 init.sh[1659]: + daemon_pids+=($!) Jul 15 23:58:50.299599 init.sh[1659]: + for d in accounts clock_skew network Jul 15 23:58:50.299599 init.sh[1659]: + daemon_pids+=($!) Jul 15 23:58:50.299599 init.sh[1659]: + NOTIFY_SOCKET=/run/systemd/notify Jul 15 23:58:50.299599 init.sh[1659]: + /usr/bin/systemd-notify --ready Jul 15 23:58:50.301576 init.sh[1737]: + /usr/bin/google_clock_skew_daemon Jul 15 23:58:50.301986 init.sh[1738]: + /usr/bin/google_network_daemon Jul 15 23:58:50.304960 init.sh[1736]: + /usr/bin/google_accounts_daemon Jul 15 23:58:50.351612 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jul 15 23:58:50.375039 init.sh[1659]: + wait -n 1736 1737 1738 Jul 15 23:58:50.524754 systemd[1728]: Queued start job for default target default.target. Jul 15 23:58:50.531527 systemd[1728]: Created slice app.slice - User Application Slice. Jul 15 23:58:50.531586 systemd[1728]: Reached target paths.target - Paths. Jul 15 23:58:50.531663 systemd[1728]: Reached target timers.target - Timers. Jul 15 23:58:50.536964 systemd[1728]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:58:50.568151 systemd[1728]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:58:50.570162 systemd[1728]: Reached target sockets.target - Sockets. Jul 15 23:58:50.570274 systemd[1728]: Reached target basic.target - Basic System. Jul 15 23:58:50.570354 systemd[1728]: Reached target default.target - Main User Target. Jul 15 23:58:50.570408 systemd[1728]: Startup finished in 382ms. Jul 15 23:58:50.570530 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:58:50.588038 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:58:50.850249 systemd[1]: Started sshd@1-10.128.0.76:22-139.178.89.65:48820.service - OpenSSH per-connection server daemon (139.178.89.65:48820). Jul 15 23:58:50.937118 groupadd[1750]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 15 23:58:50.946234 groupadd[1750]: group added to /etc/gshadow: name=google-sudoers Jul 15 23:58:50.980321 google-clock-skew[1737]: INFO Starting Google Clock Skew daemon. Jul 15 23:58:50.989275 google-networking[1738]: INFO Starting Google Networking daemon. Jul 15 23:58:50.990591 google-clock-skew[1737]: INFO Clock drift token has changed: 0. Jul 15 23:58:51.027830 groupadd[1750]: new group: name=google-sudoers, GID=1000 Jul 15 23:58:51.058947 google-accounts[1736]: INFO Starting Google Accounts daemon. Jul 15 23:58:51.072255 google-accounts[1736]: WARNING OS Login not installed. Jul 15 23:58:51.074583 google-accounts[1736]: INFO Creating a new user account for 0. 
Jul 15 23:58:51.081630 init.sh[1763]: useradd: invalid user name '0': use --badname to ignore Jul 15 23:58:51.082038 google-accounts[1736]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 15 23:58:51.201560 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 48820 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:51.203632 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:51.212663 systemd-logind[1527]: New session 2 of user core. Jul 15 23:58:51.216721 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:58:51.000308 systemd-resolved[1385]: Clock change detected. Flushing caches. Jul 15 23:58:51.015715 systemd-journald[1161]: Time jumped backwards, rotating. Jul 15 23:58:51.005752 google-clock-skew[1737]: INFO Synced system time with hardware clock. Jul 15 23:58:51.008200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:58:51.021703 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:58:51.028519 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:58:51.030829 systemd[1]: Startup finished in 4.329s (kernel) + 7.860s (initrd) + 8.633s (userspace) = 20.822s. Jul 15 23:58:51.182118 sshd[1769]: Connection closed by 139.178.89.65 port 48820 Jul 15 23:58:51.182925 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:51.189820 systemd[1]: sshd@1-10.128.0.76:22-139.178.89.65:48820.service: Deactivated successfully. Jul 15 23:58:51.193006 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:58:51.194767 systemd-logind[1527]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:58:51.196994 systemd-logind[1527]: Removed session 2. Jul 15 23:58:51.236338 systemd[1]: Started sshd@2-10.128.0.76:22-139.178.89.65:48836.service - OpenSSH per-connection server daemon (139.178.89.65:48836). Jul 15 23:58:51.280341 ntpd[1518]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4c%2]:123 Jul 15 23:58:51.280833 ntpd[1518]: 15 Jul 23:58:51 ntpd[1518]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4c%2]:123 Jul 15 23:58:51.549014 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 48836 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:51.550790 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:51.559916 systemd-logind[1527]: New session 3 of user core. Jul 15 23:58:51.562374 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:58:51.759785 sshd[1788]: Connection closed by 139.178.89.65 port 48836 Jul 15 23:58:51.760611 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:51.766596 systemd[1]: sshd@2-10.128.0.76:22-139.178.89.65:48836.service: Deactivated successfully. Jul 15 23:58:51.769413 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:58:51.771724 systemd-logind[1527]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:58:51.774430 systemd-logind[1527]: Removed session 3. Jul 15 23:58:51.813393 systemd[1]: Started sshd@3-10.128.0.76:22-139.178.89.65:48844.service - OpenSSH per-connection server daemon (139.178.89.65:48844). 
Jul 15 23:58:51.966475 kubelet[1770]: E0715 23:58:51.966393 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:58:51.969450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:58:51.969700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:58:51.970271 systemd[1]: kubelet.service: Consumed 1.328s CPU time, 265.3M memory peak. Jul 15 23:58:52.125371 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 48844 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:52.127367 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:52.135954 systemd-logind[1527]: New session 4 of user core. Jul 15 23:58:52.145440 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:58:52.342403 sshd[1798]: Connection closed by 139.178.89.65 port 48844 Jul 15 23:58:52.343480 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:52.349634 systemd[1]: sshd@3-10.128.0.76:22-139.178.89.65:48844.service: Deactivated successfully. Jul 15 23:58:52.353062 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:58:52.356433 systemd-logind[1527]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:58:52.358574 systemd-logind[1527]: Removed session 4. Jul 15 23:58:52.399111 systemd[1]: Started sshd@4-10.128.0.76:22-139.178.89.65:48856.service - OpenSSH per-connection server daemon (139.178.89.65:48856). Jul 15 23:58:52.714353 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 48856 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:52.716383 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:52.725067 systemd-logind[1527]: New session 5 of user core. Jul 15 23:58:52.734470 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:58:52.915115 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:58:52.915664 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:52.935066 sudo[1807]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:52.978952 sshd[1806]: Connection closed by 139.178.89.65 port 48856 Jul 15 23:58:52.979780 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:52.987895 systemd[1]: sshd@4-10.128.0.76:22-139.178.89.65:48856.service: Deactivated successfully. Jul 15 23:58:52.990807 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:58:52.992282 systemd-logind[1527]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:58:52.994759 systemd-logind[1527]: Removed session 5. Jul 15 23:58:53.033739 systemd[1]: Started sshd@5-10.128.0.76:22-139.178.89.65:48862.service - OpenSSH per-connection server daemon (139.178.89.65:48862). Jul 15 23:58:53.357585 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 48862 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:53.359532 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:53.367166 systemd-logind[1527]: New session 6 of user core. 
Jul 15 23:58:53.375342 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:58:53.537708 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:58:53.538351 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:53.545655 sudo[1817]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:53.559008 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:58:53.559500 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:53.572870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:58:53.624332 augenrules[1839]: No rules Jul 15 23:58:53.625789 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:58:53.626208 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:58:53.627669 sudo[1816]: pam_unix(sudo:session): session closed for user root Jul 15 23:58:53.670768 sshd[1815]: Connection closed by 139.178.89.65 port 48862 Jul 15 23:58:53.671609 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:53.677550 systemd[1]: sshd@5-10.128.0.76:22-139.178.89.65:48862.service: Deactivated successfully. Jul 15 23:58:53.679802 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:58:53.681059 systemd-logind[1527]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:58:53.682956 systemd-logind[1527]: Removed session 6. Jul 15 23:58:53.725549 systemd[1]: Started sshd@6-10.128.0.76:22-139.178.89.65:48864.service - OpenSSH per-connection server daemon (139.178.89.65:48864). Jul 15 23:58:54.035996 sshd[1848]: Accepted publickey for core from 139.178.89.65 port 48864 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:54.037761 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:54.045335 systemd-logind[1527]: New session 7 of user core. Jul 15 23:58:54.048320 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:58:54.217920 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:58:54.218482 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:58:54.770661 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:58:54.802036 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:58:55.155869 dockerd[1869]: time="2025-07-15T23:58:55.154397022Z" level=info msg="Starting up" Jul 15 23:58:55.159824 dockerd[1869]: time="2025-07-15T23:58:55.159757772Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:58:55.337931 systemd[1]: var-lib-docker-metacopy\x2dcheck1839463059-merged.mount: Deactivated successfully. Jul 15 23:58:55.361147 dockerd[1869]: time="2025-07-15T23:58:55.360771679Z" level=info msg="Loading containers: start." Jul 15 23:58:55.381129 kernel: Initializing XFRM netlink socket Jul 15 23:58:55.722010 systemd-networkd[1448]: docker0: Link UP Jul 15 23:58:55.728716 dockerd[1869]: time="2025-07-15T23:58:55.728653656Z" level=info msg="Loading containers: done." 
Jul 15 23:58:55.748576 dockerd[1869]: time="2025-07-15T23:58:55.748511033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:58:55.748758 dockerd[1869]: time="2025-07-15T23:58:55.748623642Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:58:55.748819 dockerd[1869]: time="2025-07-15T23:58:55.748773639Z" level=info msg="Initializing buildkit" Jul 15 23:58:55.782412 dockerd[1869]: time="2025-07-15T23:58:55.782342463Z" level=info msg="Completed buildkit initialization" Jul 15 23:58:55.791545 dockerd[1869]: time="2025-07-15T23:58:55.791472516Z" level=info msg="Daemon has completed initialization" Jul 15 23:58:55.791928 dockerd[1869]: time="2025-07-15T23:58:55.791663427Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:58:55.791819 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:58:56.711891 containerd[1585]: time="2025-07-15T23:58:56.711818552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 15 23:58:57.288338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145770240.mount: Deactivated successfully. Jul 15 23:58:58.822114 containerd[1585]: time="2025-07-15T23:58:58.822041948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:58.823598 containerd[1585]: time="2025-07-15T23:58:58.823545278Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28084387" Jul 15 23:58:58.824992 containerd[1585]: time="2025-07-15T23:58:58.824913418Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:58.828496 containerd[1585]: time="2025-07-15T23:58:58.828414081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:58.830121 containerd[1585]: time="2025-07-15T23:58:58.829735488Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.117865349s" Jul 15 23:58:58.830121 containerd[1585]: time="2025-07-15T23:58:58.829785851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Jul 15 23:58:58.830518 containerd[1585]: time="2025-07-15T23:58:58.830477275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 15 23:59:00.259317 containerd[1585]: time="2025-07-15T23:59:00.259241467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:00.260677 containerd[1585]: time="2025-07-15T23:59:00.260621831Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24715179" Jul 15 23:59:00.262212 containerd[1585]: time="2025-07-15T23:59:00.262144402Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:00.265644 containerd[1585]: time="2025-07-15T23:59:00.265574001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:00.267051 containerd[1585]: time="2025-07-15T23:59:00.266829051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.436204512s" Jul 15 23:59:00.267051 containerd[1585]: time="2025-07-15T23:59:00.266878203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Jul 15 23:59:00.267829 containerd[1585]: time="2025-07-15T23:59:00.267785008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 15 23:59:01.552201 containerd[1585]: time="2025-07-15T23:59:01.552077736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:01.553834 containerd[1585]: time="2025-07-15T23:59:01.553788152Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18785616" Jul 15 23:59:01.555372 containerd[1585]: time="2025-07-15T23:59:01.555195262Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:01.559891 containerd[1585]: time="2025-07-15T23:59:01.559822421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:01.561558 containerd[1585]: time="2025-07-15T23:59:01.561374387Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.293518343s" Jul 15 23:59:01.561558 containerd[1585]: time="2025-07-15T23:59:01.561431064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Jul 15 23:59:01.562188 containerd[1585]: time="2025-07-15T23:59:01.562159465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 15 23:59:02.159740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:59:02.163376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:59:02.600665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:02.613044 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:59:02.704575 kubelet[2144]: E0715 23:59:02.704374 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:59:02.712477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:59:02.713015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:59:02.714022 systemd[1]: kubelet.service: Consumed 281ms CPU time, 109.9M memory peak. Jul 15 23:59:02.930138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478448024.mount: Deactivated successfully. Jul 15 23:59:03.572792 containerd[1585]: time="2025-07-15T23:59:03.572719603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:03.574250 containerd[1585]: time="2025-07-15T23:59:03.574140639Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30385507" Jul 15 23:59:03.575860 containerd[1585]: time="2025-07-15T23:59:03.575785942Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:03.578653 containerd[1585]: time="2025-07-15T23:59:03.578586596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:03.579714 containerd[1585]: time="2025-07-15T23:59:03.579400683Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.017073196s" Jul 15 23:59:03.579714 containerd[1585]: time="2025-07-15T23:59:03.579449469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Jul 15 23:59:03.580399 containerd[1585]: time="2025-07-15T23:59:03.580139395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 23:59:04.063080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179031347.mount: Deactivated successfully. 
Jul 15 23:59:05.363399 containerd[1585]: time="2025-07-15T23:59:05.363324149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:05.366913 containerd[1585]: time="2025-07-15T23:59:05.366836260Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Jul 15 23:59:05.367337 containerd[1585]: time="2025-07-15T23:59:05.367301377Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:05.374316 containerd[1585]: time="2025-07-15T23:59:05.374275660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:05.375709 containerd[1585]: time="2025-07-15T23:59:05.375660548Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.795481458s" Jul 15 23:59:05.375805 containerd[1585]: time="2025-07-15T23:59:05.375714354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 23:59:05.376860 containerd[1585]: time="2025-07-15T23:59:05.376824454Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:59:05.831664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309610884.mount: Deactivated successfully. 
Jul 15 23:59:05.841032 containerd[1585]: time="2025-07-15T23:59:05.840965031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:59:05.842162 containerd[1585]: time="2025-07-15T23:59:05.842024675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jul 15 23:59:05.843940 containerd[1585]: time="2025-07-15T23:59:05.843863201Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:59:05.847212 containerd[1585]: time="2025-07-15T23:59:05.847143616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:59:05.848308 containerd[1585]: time="2025-07-15T23:59:05.848263283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.396334ms" Jul 15 23:59:05.848393 containerd[1585]: time="2025-07-15T23:59:05.848311698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 23:59:05.849307 containerd[1585]: time="2025-07-15T23:59:05.849278582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 23:59:06.303991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509950798.mount: Deactivated successfully. 
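Unlike the other images, the pause sandbox image above carries a pinned label, which marks it to be kept by image garbage collection. A small sketch, assuming crictl and ctr are available on the node, of inspecting that record:

  #!/usr/bin/env bash
  # CRI view of the pinned sandbox image pulled above.
  crictl inspecti registry.k8s.io/pause:3.10 | head -n 40

  # containerd's own view; k8s.io is the namespace its CRI plugin uses.
  ctr -n k8s.io images ls | grep pause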
Jul 15 23:59:08.473568 containerd[1585]: time="2025-07-15T23:59:08.473491937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:08.475017 containerd[1585]: time="2025-07-15T23:59:08.474965651Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786577" Jul 15 23:59:08.476631 containerd[1585]: time="2025-07-15T23:59:08.476540760Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:08.480906 containerd[1585]: time="2025-07-15T23:59:08.480820957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:08.482502 containerd[1585]: time="2025-07-15T23:59:08.482305898Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.632888304s" Jul 15 23:59:08.482502 containerd[1585]: time="2025-07-15T23:59:08.482351540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 23:59:12.018867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:12.019180 systemd[1]: kubelet.service: Consumed 281ms CPU time, 109.9M memory peak. Jul 15 23:59:12.022611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:59:12.071549 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-7.scope)... Jul 15 23:59:12.071571 systemd[1]: Reloading... Jul 15 23:59:12.277139 zram_generator::config[2338]: No configuration found. Jul 15 23:59:12.437790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:59:12.622865 systemd[1]: Reloading finished in 550 ms. Jul 15 23:59:12.673649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:59:12.673813 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:59:12.674368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:12.674541 systemd[1]: kubelet.service: Consumed 155ms CPU time, 97.2M memory peak. Jul 15 23:59:12.677348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:59:13.319453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:13.339747 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:59:13.393132 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:59:13.393132 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 15 23:59:13.393132 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:59:13.393714 kubelet[2386]: I0715 23:59:13.393261 2386 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:59:14.028170 kubelet[2386]: I0715 23:59:14.028110 2386 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:59:14.028170 kubelet[2386]: I0715 23:59:14.028152 2386 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:59:14.028560 kubelet[2386]: I0715 23:59:14.028521 2386 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:59:14.069444 kubelet[2386]: E0715 23:59:14.069377 2386 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.070715 kubelet[2386]: I0715 23:59:14.070670 2386 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:59:14.084579 kubelet[2386]: I0715 23:59:14.084491 2386 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:59:14.090253 kubelet[2386]: I0715 23:59:14.090199 2386 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:59:14.090422 kubelet[2386]: I0715 23:59:14.090399 2386 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 23:59:14.090654 kubelet[2386]: I0715 23:59:14.090599 2386 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:59:14.090878 kubelet[2386]: I0715 23:59:14.090639 2386 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:59:14.091059 kubelet[2386]: I0715 23:59:14.090883 2386 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:59:14.091059 kubelet[2386]: I0715 23:59:14.090901 2386 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 23:59:14.091059 kubelet[2386]: I0715 23:59:14.091053 2386 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:59:14.097896 kubelet[2386]: I0715 23:59:14.097828 2386 kubelet.go:408] "Attempting to sync node with API server" Jul 15 23:59:14.097896 kubelet[2386]: I0715 23:59:14.097872 2386 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:59:14.098045 kubelet[2386]: I0715 23:59:14.097921 2386 kubelet.go:314] "Adding apiserver pod source" Jul 15 23:59:14.098045 kubelet[2386]: I0715 23:59:14.097952 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:59:14.104171 kubelet[2386]: W0715 23:59:14.103250 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.104171 kubelet[2386]: E0715 23:59:14.103379 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.104171 kubelet[2386]: I0715 23:59:14.103480 2386 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:59:14.104443 kubelet[2386]: I0715 23:59:14.104422 2386 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:59:14.104590 kubelet[2386]: W0715 23:59:14.104576 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:59:14.108707 kubelet[2386]: I0715 23:59:14.108672 2386 server.go:1274] "Started kubelet" Jul 15 23:59:14.111446 kubelet[2386]: W0715 23:59:14.111387 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.111556 kubelet[2386]: E0715 23:59:14.111463 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.113340 kubelet[2386]: I0715 23:59:14.111613 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:59:14.113340 kubelet[2386]: I0715 23:59:14.112037 2386 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:59:14.113340 kubelet[2386]: I0715 23:59:14.112190 2386 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:59:14.114342 kubelet[2386]: I0715 23:59:14.113589 2386 server.go:449] "Adding debug handlers to kubelet server" Jul 15 23:59:14.115943 kubelet[2386]: I0715 23:59:14.115921 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:59:14.124663 kubelet[2386]: I0715 23:59:14.124630 2386 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 23:59:14.125041 kubelet[2386]: E0715 23:59:14.125013 2386 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" not found" Jul 15 23:59:14.125612 kubelet[2386]: I0715 23:59:14.125589 2386 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 23:59:14.125830 kubelet[2386]: I0715 23:59:14.125815 2386 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:59:14.126556 kubelet[2386]: E0715 23:59:14.126535 2386 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:59:14.127167 kubelet[2386]: I0715 23:59:14.127145 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:59:14.129966 kubelet[2386]: I0715 23:59:14.127562 2386 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:59:14.130819 kubelet[2386]: I0715 23:59:14.130270 2386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:59:14.131530 kubelet[2386]: E0715 23:59:14.127782 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="200ms" Jul 15 23:59:14.131530 kubelet[2386]: W0715 23:59:14.127685 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.131670 kubelet[2386]: E0715 23:59:14.131567 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.134117 kubelet[2386]: E0715 23:59:14.130676 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f.18529233517c6bf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,UID:ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,},FirstTimestamp:2025-07-15 23:59:14.108640246 +0000 UTC m=+0.763498276,LastTimestamp:2025-07-15 23:59:14.108640246 +0000 UTC m=+0.763498276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,}" Jul 15 23:59:14.134117 kubelet[2386]: I0715 23:59:14.134082 2386 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:59:14.163382 kubelet[2386]: I0715 23:59:14.163315 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:59:14.165152 kubelet[2386]: I0715 23:59:14.165045 2386 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:59:14.165152 kubelet[2386]: I0715 23:59:14.165077 2386 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 23:59:14.165152 kubelet[2386]: I0715 23:59:14.165129 2386 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 23:59:14.165820 kubelet[2386]: E0715 23:59:14.165204 2386 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:59:14.172361 kubelet[2386]: W0715 23:59:14.172260 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.172361 kubelet[2386]: E0715 23:59:14.172315 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.175310 kubelet[2386]: I0715 23:59:14.175230 2386 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 23:59:14.175310 kubelet[2386]: I0715 23:59:14.175249 2386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 23:59:14.175310 kubelet[2386]: I0715 23:59:14.175274 2386 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:59:14.179822 kubelet[2386]: I0715 23:59:14.179752 2386 policy_none.go:49] "None policy: Start" Jul 15 23:59:14.180802 kubelet[2386]: I0715 23:59:14.180782 2386 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 23:59:14.181138 kubelet[2386]: I0715 23:59:14.180965 2386 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:59:14.190664 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:59:14.213202 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:59:14.220851 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:59:14.225896 kubelet[2386]: E0715 23:59:14.225856 2386 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" not found" Jul 15 23:59:14.231457 kubelet[2386]: I0715 23:59:14.231398 2386 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:59:14.231717 kubelet[2386]: I0715 23:59:14.231688 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:59:14.231822 kubelet[2386]: I0715 23:59:14.231775 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:59:14.232500 kubelet[2386]: I0715 23:59:14.232466 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:59:14.236261 kubelet[2386]: E0715 23:59:14.236230 2386 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" not found" Jul 15 23:59:14.287452 systemd[1]: Created slice kubepods-burstable-pod0c92231c72a3acfb3fdb04074739fecd.slice - libcontainer container kubepods-burstable-pod0c92231c72a3acfb3fdb04074739fecd.slice. 
Jul 15 23:59:14.307884 systemd[1]: Created slice kubepods-burstable-podcc1d454e067a90bbeeaec70187a9c476.slice - libcontainer container kubepods-burstable-podcc1d454e067a90bbeeaec70187a9c476.slice. Jul 15 23:59:14.322875 systemd[1]: Created slice kubepods-burstable-pode2610df0603c4d86e5c2183807a0fca5.slice - libcontainer container kubepods-burstable-pode2610df0603c4d86e5c2183807a0fca5.slice. Jul 15 23:59:14.327925 kubelet[2386]: I0715 23:59:14.327877 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328217 kubelet[2386]: I0715 23:59:14.327944 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328217 kubelet[2386]: I0715 23:59:14.328025 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328217 kubelet[2386]: I0715 23:59:14.328086 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2610df0603c4d86e5c2183807a0fca5-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"e2610df0603c4d86e5c2183807a0fca5\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328217 kubelet[2386]: I0715 23:59:14.328148 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328518 kubelet[2386]: I0715 23:59:14.328195 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328518 kubelet[2386]: I0715 23:59:14.328232 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328518 kubelet[2386]: I0715 23:59:14.328261 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.328518 kubelet[2386]: I0715 23:59:14.328322 2386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.332460 kubelet[2386]: E0715 23:59:14.332410 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="400ms" Jul 15 23:59:14.336868 kubelet[2386]: I0715 23:59:14.336842 2386 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.337310 kubelet[2386]: E0715 23:59:14.337244 2386 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.546630 kubelet[2386]: I0715 23:59:14.546477 2386 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.547651 kubelet[2386]: E0715 23:59:14.547611 2386 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.604454 containerd[1585]: time="2025-07-15T23:59:14.604390308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:0c92231c72a3acfb3fdb04074739fecd,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:14.620534 containerd[1585]: time="2025-07-15T23:59:14.620237754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:cc1d454e067a90bbeeaec70187a9c476,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:14.631823 containerd[1585]: time="2025-07-15T23:59:14.631763298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:e2610df0603c4d86e5c2183807a0fca5,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:14.643363 containerd[1585]: 
time="2025-07-15T23:59:14.643314654Z" level=info msg="connecting to shim acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49" address="unix:///run/containerd/s/898647bedc6a3ac90f95764c1212884b52c7309225515fe6b49dc67033fbe853" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:14.675925 containerd[1585]: time="2025-07-15T23:59:14.675865493Z" level=info msg="connecting to shim 164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce" address="unix:///run/containerd/s/0c47038a0e758cbd518cf31831d7b36650a9353646ec837936f7086d9a4d0a63" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:14.702119 containerd[1585]: time="2025-07-15T23:59:14.701933188Z" level=info msg="connecting to shim 82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33" address="unix:///run/containerd/s/64528514fd2fd072141a37519a4397986f1d1a761ac7052070f86756061867a1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:14.731409 systemd[1]: Started cri-containerd-acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49.scope - libcontainer container acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49. Jul 15 23:59:14.733693 kubelet[2386]: E0715 23:59:14.733599 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f?timeout=10s\": dial tcp 10.128.0.76:6443: connect: connection refused" interval="800ms" Jul 15 23:59:14.758617 systemd[1]: Started cri-containerd-164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce.scope - libcontainer container 164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce. Jul 15 23:59:14.771448 systemd[1]: Started cri-containerd-82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33.scope - libcontainer container 82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33. 
Jul 15 23:59:14.897165 containerd[1585]: time="2025-07-15T23:59:14.896364946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:0c92231c72a3acfb3fdb04074739fecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49\"" Jul 15 23:59:14.903927 kubelet[2386]: E0715 23:59:14.903762 2386 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f" Jul 15 23:59:14.906915 containerd[1585]: time="2025-07-15T23:59:14.906430569Z" level=info msg="CreateContainer within sandbox \"acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:59:14.918798 containerd[1585]: time="2025-07-15T23:59:14.918768995Z" level=info msg="Container f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:14.920543 containerd[1585]: time="2025-07-15T23:59:14.920513409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:e2610df0603c4d86e5c2183807a0fca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33\"" Jul 15 23:59:14.923175 kubelet[2386]: E0715 23:59:14.923016 2386 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f" Jul 15 23:59:14.924482 containerd[1585]: time="2025-07-15T23:59:14.924414972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f,Uid:cc1d454e067a90bbeeaec70187a9c476,Namespace:kube-system,Attempt:0,} returns sandbox id \"164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce\"" Jul 15 23:59:14.925213 containerd[1585]: time="2025-07-15T23:59:14.925173018Z" level=info msg="CreateContainer within sandbox \"82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:59:14.926605 kubelet[2386]: E0715 23:59:14.926576 2386 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04" Jul 15 23:59:14.928893 containerd[1585]: time="2025-07-15T23:59:14.928857473Z" level=info msg="CreateContainer within sandbox \"164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:59:14.932821 containerd[1585]: time="2025-07-15T23:59:14.932782613Z" level=info msg="CreateContainer within sandbox \"acf5aab5b7d197b27308c684f1111aa67be1f4ee2e683056fcef00b65f03de49\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b\"" Jul 15 23:59:14.933590 containerd[1585]: time="2025-07-15T23:59:14.933560448Z" level=info msg="StartContainer for 
\"f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b\"" Jul 15 23:59:14.935454 containerd[1585]: time="2025-07-15T23:59:14.935418797Z" level=info msg="connecting to shim f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b" address="unix:///run/containerd/s/898647bedc6a3ac90f95764c1212884b52c7309225515fe6b49dc67033fbe853" protocol=ttrpc version=3 Jul 15 23:59:14.936833 kubelet[2386]: W0715 23:59:14.936706 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.936833 kubelet[2386]: E0715 23:59:14.936794 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.76:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.941781 containerd[1585]: time="2025-07-15T23:59:14.941747503Z" level=info msg="Container 86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:14.942577 kubelet[2386]: W0715 23:59:14.942510 2386 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.76:6443: connect: connection refused Jul 15 23:59:14.943336 kubelet[2386]: E0715 23:59:14.942734 2386 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.76:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:59:14.946165 containerd[1585]: time="2025-07-15T23:59:14.946129257Z" level=info msg="Container 45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:14.954389 containerd[1585]: time="2025-07-15T23:59:14.954145530Z" level=info msg="CreateContainer within sandbox \"82e5dd0329729121d6234846cdfbc4d74e5764f23a26ae8be29279a607bcbd33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad\"" Jul 15 23:59:14.955580 kubelet[2386]: I0715 23:59:14.955552 2386 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.956327 kubelet[2386]: E0715 23:59:14.956293 2386 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.76:6443/api/v1/nodes\": dial tcp 10.128.0.76:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:14.957078 containerd[1585]: time="2025-07-15T23:59:14.957035843Z" level=info msg="StartContainer for \"86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad\"" Jul 15 23:59:14.958759 containerd[1585]: time="2025-07-15T23:59:14.958724447Z" level=info msg="connecting to shim 86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad" address="unix:///run/containerd/s/64528514fd2fd072141a37519a4397986f1d1a761ac7052070f86756061867a1" protocol=ttrpc version=3 Jul 15 
23:59:14.961033 containerd[1585]: time="2025-07-15T23:59:14.960865272Z" level=info msg="CreateContainer within sandbox \"164c6d30dd7b3bd90a2922807ed3e41156eb0f2b22818dfe47fd7188888387ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2\"" Jul 15 23:59:14.961953 containerd[1585]: time="2025-07-15T23:59:14.961924521Z" level=info msg="StartContainer for \"45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2\"" Jul 15 23:59:14.965109 containerd[1585]: time="2025-07-15T23:59:14.964963649Z" level=info msg="connecting to shim 45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2" address="unix:///run/containerd/s/0c47038a0e758cbd518cf31831d7b36650a9353646ec837936f7086d9a4d0a63" protocol=ttrpc version=3 Jul 15 23:59:14.973479 systemd[1]: Started cri-containerd-f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b.scope - libcontainer container f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b. Jul 15 23:59:15.008464 systemd[1]: Started cri-containerd-45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2.scope - libcontainer container 45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2. Jul 15 23:59:15.025338 systemd[1]: Started cri-containerd-86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad.scope - libcontainer container 86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad. Jul 15 23:59:15.107162 containerd[1585]: time="2025-07-15T23:59:15.107101433Z" level=info msg="StartContainer for \"f160fcb2c1b72ceaee11700921f6fe2a6796bf5435ba28d0b1787d83698dae3b\" returns successfully" Jul 15 23:59:15.168979 containerd[1585]: time="2025-07-15T23:59:15.168587984Z" level=info msg="StartContainer for \"45f80f23e3c6664ad55f59ccd45cdf67b900837817c2ce3846db901079deaae2\" returns successfully" Jul 15 23:59:15.209384 containerd[1585]: time="2025-07-15T23:59:15.209338864Z" level=info msg="StartContainer for \"86b2b1b5a8b8f3aad19af92a05adf6b484f6fa240f144b33b332f43c4e8cd6ad\" returns successfully" Jul 15 23:59:15.763773 kubelet[2386]: I0715 23:59:15.763728 2386 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:18.112458 kubelet[2386]: I0715 23:59:18.112282 2386 apiserver.go:52] "Watching apiserver" Jul 15 23:59:18.114449 kubelet[2386]: E0715 23:59:18.114307 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" not found" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:18.126836 kubelet[2386]: I0715 23:59:18.126778 2386 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:59:18.194421 kubelet[2386]: I0715 23:59:18.194372 2386 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:18.599905 kubelet[2386]: E0715 23:59:18.599612 2386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:19.196166 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
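The recurring "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS" notice is benign: kubeadm-style kubelet units reference that variable from an optional environment file, and it is simply empty here. A hedged sketch of locating the reference; the drop-in path is the common kubeadm convention and is an assumption for this particular image:

  #!/usr/bin/env bash
  # Show the unit plus any drop-ins and EnvironmentFile= lines that mention it.
  systemctl cat kubelet.service --no-pager
  grep -r KUBELET_EXTRA_ARGS /etc/systemd/system/kubelet.service.d/ 2>/dev/null || true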
Jul 15 23:59:19.532702 kubelet[2386]: W0715 23:59:19.532347 2386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:59:20.309680 systemd[1]: Reload requested from client PID 2662 ('systemctl') (unit session-7.scope)... Jul 15 23:59:20.309703 systemd[1]: Reloading... Jul 15 23:59:20.454141 zram_generator::config[2702]: No configuration found. Jul 15 23:59:20.588036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:59:20.621459 kubelet[2386]: W0715 23:59:20.621381 2386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:59:20.790036 systemd[1]: Reloading finished in 479 ms. Jul 15 23:59:20.830078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:59:20.846994 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:59:20.847637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:20.847743 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 130.6M memory peak. Jul 15 23:59:20.850698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:59:21.155227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:59:21.168929 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:59:21.244586 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:59:21.244586 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 23:59:21.244586 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:59:21.245164 kubelet[2754]: I0715 23:59:21.244724 2754 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:59:21.256773 kubelet[2754]: I0715 23:59:21.256401 2754 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:59:21.256773 kubelet[2754]: I0715 23:59:21.256432 2754 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:59:21.256953 kubelet[2754]: I0715 23:59:21.256782 2754 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:59:21.258513 kubelet[2754]: I0715 23:59:21.258478 2754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
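Unlike the first start, this kubelet finds an existing bootstrapped client certificate and loads it directly, so no certificate signing request against the (still unreachable) API server is needed. A small sketch of checking that certificate's validity window, using the path taken from the log:

  #!/usr/bin/env bash
  # Inspect the rotated kubelet client certificate the log says is loaded.
  CERT=/var/lib/kubelet/pki/kubelet-client-current.pem
  openssl x509 -in "$CERT" -noout -subject -issuer -dates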
Jul 15 23:59:21.264026 kubelet[2754]: I0715 23:59:21.263979 2754 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:59:21.272400 kubelet[2754]: I0715 23:59:21.272220 2754 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:59:21.281036 kubelet[2754]: I0715 23:59:21.280826 2754 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:59:21.281462 kubelet[2754]: I0715 23:59:21.281282 2754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 23:59:21.281653 kubelet[2754]: I0715 23:59:21.281523 2754 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:59:21.281984 kubelet[2754]: I0715 23:59:21.281563 2754 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:59:21.281984 kubelet[2754]: I0715 23:59:21.281836 2754 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:59:21.281984 kubelet[2754]: I0715 23:59:21.281853 2754 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 23:59:21.281984 kubelet[2754]: I0715 23:59:21.281893 2754 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:59:21.284487 kubelet[2754]: I0715 23:59:21.282049 2754 kubelet.go:408] "Attempting to sync node with API server" Jul 15 23:59:21.284487 kubelet[2754]: I0715 23:59:21.282069 2754 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:59:21.284487 kubelet[2754]: I0715 23:59:21.282128 2754 kubelet.go:314] "Adding apiserver pod source" Jul 15 23:59:21.284487 kubelet[2754]: I0715 23:59:21.282147 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:59:21.287117 kubelet[2754]: I0715 23:59:21.284793 2754 kuberuntime_manager.go:262] 
"Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:59:21.287117 kubelet[2754]: I0715 23:59:21.285415 2754 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:59:21.287117 kubelet[2754]: I0715 23:59:21.285997 2754 server.go:1274] "Started kubelet" Jul 15 23:59:21.294134 kubelet[2754]: I0715 23:59:21.292851 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:59:21.307325 kubelet[2754]: I0715 23:59:21.307269 2754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:59:21.308955 kubelet[2754]: I0715 23:59:21.308926 2754 server.go:449] "Adding debug handlers to kubelet server" Jul 15 23:59:21.310531 kubelet[2754]: I0715 23:59:21.310491 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:59:21.310879 kubelet[2754]: I0715 23:59:21.310856 2754 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:59:21.311360 kubelet[2754]: I0715 23:59:21.311335 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:59:21.321247 kubelet[2754]: I0715 23:59:21.321218 2754 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 23:59:21.342978 kubelet[2754]: I0715 23:59:21.322152 2754 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 23:59:21.344582 sudo[2772]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:59:21.345156 sudo[2772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:59:21.345440 kubelet[2754]: I0715 23:59:21.345419 2754 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:59:21.345549 kubelet[2754]: E0715 23:59:21.322355 2754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" not found" Jul 15 23:59:21.351689 kubelet[2754]: I0715 23:59:21.351640 2754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:59:21.355608 kubelet[2754]: I0715 23:59:21.355580 2754 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:59:21.355785 kubelet[2754]: I0715 23:59:21.355770 2754 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 23:59:21.355870 kubelet[2754]: I0715 23:59:21.355861 2754 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 23:59:21.356013 kubelet[2754]: E0715 23:59:21.355988 2754 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:59:21.370126 kubelet[2754]: I0715 23:59:21.368886 2754 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:59:21.371114 kubelet[2754]: I0715 23:59:21.370393 2754 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:59:21.380134 kubelet[2754]: I0715 23:59:21.379041 2754 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:59:21.462064 kubelet[2754]: E0715 23:59:21.458906 2754 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:59:21.463841 kubelet[2754]: I0715 23:59:21.463816 2754 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 23:59:21.464443 kubelet[2754]: I0715 23:59:21.464345 2754 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 23:59:21.465281 kubelet[2754]: I0715 23:59:21.465259 2754 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:59:21.465659 kubelet[2754]: I0715 23:59:21.465632 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:59:21.465799 kubelet[2754]: I0715 23:59:21.465765 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:59:21.465875 kubelet[2754]: I0715 23:59:21.465865 2754 policy_none.go:49] "None policy: Start" Jul 15 23:59:21.467270 kubelet[2754]: I0715 23:59:21.467251 2754 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 23:59:21.467512 kubelet[2754]: I0715 23:59:21.467500 2754 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:59:21.467862 kubelet[2754]: I0715 23:59:21.467792 2754 state_mem.go:75] "Updated machine memory state" Jul 15 23:59:21.478736 kubelet[2754]: I0715 23:59:21.478228 2754 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:59:21.480851 kubelet[2754]: I0715 23:59:21.479321 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:59:21.480851 kubelet[2754]: I0715 23:59:21.479343 2754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:59:21.480851 kubelet[2754]: I0715 23:59:21.479872 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:59:21.603546 kubelet[2754]: I0715 23:59:21.603483 2754 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.618114 kubelet[2754]: I0715 23:59:21.618003 2754 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.618270 kubelet[2754]: I0715 23:59:21.618133 2754 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.670900 kubelet[2754]: W0715 23:59:21.670797 2754 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:59:21.671189 kubelet[2754]: E0715 23:59:21.671150 2754 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" already exists" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.673107 kubelet[2754]: W0715 23:59:21.673022 2754 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:59:21.673326 kubelet[2754]: E0715 23:59:21.673262 2754 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.675189 kubelet[2754]: W0715 23:59:21.674961 2754 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:59:21.748182 kubelet[2754]: I0715 23:59:21.748009 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749244 kubelet[2754]: I0715 23:59:21.748816 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2610df0603c4d86e5c2183807a0fca5-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"e2610df0603c4d86e5c2183807a0fca5\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749244 kubelet[2754]: I0715 23:59:21.748871 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749244 kubelet[2754]: I0715 23:59:21.748903 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749244 kubelet[2754]: I0715 23:59:21.748932 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " 
pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749458 kubelet[2754]: I0715 23:59:21.748962 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749458 kubelet[2754]: I0715 23:59:21.748988 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749458 kubelet[2754]: I0715 23:59:21.749016 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cc1d454e067a90bbeeaec70187a9c476-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"cc1d454e067a90bbeeaec70187a9c476\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:21.749458 kubelet[2754]: I0715 23:59:21.749044 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c92231c72a3acfb3fdb04074739fecd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" (UID: \"0c92231c72a3acfb3fdb04074739fecd\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" Jul 15 23:59:22.113566 sudo[2772]: pam_unix(sudo:session): session closed for user root Jul 15 23:59:22.283419 kubelet[2754]: I0715 23:59:22.283361 2754 apiserver.go:52] "Watching apiserver" Jul 15 23:59:22.346425 kubelet[2754]: I0715 23:59:22.346267 2754 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:59:22.512232 kubelet[2754]: I0715 23:59:22.511777 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" podStartSLOduration=3.51144241 podStartE2EDuration="3.51144241s" podCreationTimestamp="2025-07-15 23:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:22.489325182 +0000 UTC m=+1.313245712" watchObservedRunningTime="2025-07-15 23:59:22.51144241 +0000 UTC m=+1.335362940" Jul 15 23:59:22.529887 kubelet[2754]: I0715 23:59:22.529811 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" podStartSLOduration=1.5297907830000002 podStartE2EDuration="1.529790783s" podCreationTimestamp="2025-07-15 23:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 
23:59:22.511233787 +0000 UTC m=+1.335154315" watchObservedRunningTime="2025-07-15 23:59:22.529790783 +0000 UTC m=+1.353711314" Jul 15 23:59:22.545763 kubelet[2754]: I0715 23:59:22.545602 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" podStartSLOduration=2.545533598 podStartE2EDuration="2.545533598s" podCreationTimestamp="2025-07-15 23:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:22.530369902 +0000 UTC m=+1.354290432" watchObservedRunningTime="2025-07-15 23:59:22.545533598 +0000 UTC m=+1.369454124" Jul 15 23:59:24.250670 sudo[1851]: pam_unix(sudo:session): session closed for user root Jul 15 23:59:24.293994 sshd[1850]: Connection closed by 139.178.89.65 port 48864 Jul 15 23:59:24.294994 sshd-session[1848]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:24.301384 systemd[1]: sshd@6-10.128.0.76:22-139.178.89.65:48864.service: Deactivated successfully. Jul 15 23:59:24.305221 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:59:24.305791 systemd[1]: session-7.scope: Consumed 6.713s CPU time, 267.9M memory peak. Jul 15 23:59:24.308299 systemd-logind[1527]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:59:24.310636 systemd-logind[1527]: Removed session 7. Jul 15 23:59:26.976896 kubelet[2754]: I0715 23:59:26.976853 2754 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:59:26.978109 kubelet[2754]: I0715 23:59:26.977674 2754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:59:26.978236 containerd[1585]: time="2025-07-15T23:59:26.977341337Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:59:27.885780 systemd[1]: Created slice kubepods-besteffort-podf7ce20dd_14fd_465c_80f2_24078a1837c1.slice - libcontainer container kubepods-besteffort-podf7ce20dd_14fd_465c_80f2_24078a1837c1.slice. 
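Editor's note on the pod_startup_latency_tracker entries above: for the statically created control-plane pods the pull timestamps stay at the zero value (0001-01-01), and the reported podStartSLOduration appears to be simply watchObservedRunningTime minus podCreationTimestamp. A quick sanity check in Python using the kube-scheduler values copied from the log; the variable names are mine, not kubelet's:

```python
from datetime import datetime, timezone

# Values taken from the kube-scheduler pod_startup_latency_tracker entry above.
pod_created  = datetime(2025, 7, 15, 23, 59, 19, tzinfo=timezone.utc)          # podCreationTimestamp
observed_run = datetime(2025, 7, 15, 23, 59, 22, 511442, tzinfo=timezone.utc)  # watchObservedRunningTime (23:59:22.51144241, truncated to µs)

print((observed_run - pod_created).total_seconds())  # 3.511442 ~= podStartSLOduration=3.51144241s
```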
Jul 15 23:59:27.894231 kubelet[2754]: I0715 23:59:27.894168 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7ce20dd-14fd-465c-80f2-24078a1837c1-kube-proxy\") pod \"kube-proxy-7wwc4\" (UID: \"f7ce20dd-14fd-465c-80f2-24078a1837c1\") " pod="kube-system/kube-proxy-7wwc4" Jul 15 23:59:27.895823 kubelet[2754]: I0715 23:59:27.894729 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnbw\" (UniqueName: \"kubernetes.io/projected/f7ce20dd-14fd-465c-80f2-24078a1837c1-kube-api-access-spnbw\") pod \"kube-proxy-7wwc4\" (UID: \"f7ce20dd-14fd-465c-80f2-24078a1837c1\") " pod="kube-system/kube-proxy-7wwc4" Jul 15 23:59:27.898242 kubelet[2754]: I0715 23:59:27.898213 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7ce20dd-14fd-465c-80f2-24078a1837c1-xtables-lock\") pod \"kube-proxy-7wwc4\" (UID: \"f7ce20dd-14fd-465c-80f2-24078a1837c1\") " pod="kube-system/kube-proxy-7wwc4" Jul 15 23:59:27.898435 kubelet[2754]: I0715 23:59:27.898413 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7ce20dd-14fd-465c-80f2-24078a1837c1-lib-modules\") pod \"kube-proxy-7wwc4\" (UID: \"f7ce20dd-14fd-465c-80f2-24078a1837c1\") " pod="kube-system/kube-proxy-7wwc4" Jul 15 23:59:27.913264 systemd[1]: Created slice kubepods-burstable-podd37e919e_df49_47ca_9ad7_f6312f5775fa.slice - libcontainer container kubepods-burstable-podd37e919e_df49_47ca_9ad7_f6312f5775fa.slice. Jul 15 23:59:27.998979 kubelet[2754]: I0715 23:59:27.998917 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmts\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-kube-api-access-hnmts\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.998979 kubelet[2754]: I0715 23:59:27.998984 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-hostproc\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999017 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-xtables-lock\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999041 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-net\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999066 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-etc-cni-netd\") pod \"cilium-gcbhs\" (UID: 
\"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999110 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-cgroup\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999140 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-hubble-tls\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999611 kubelet[2754]: I0715 23:59:27.999164 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-run\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999259 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-lib-modules\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999287 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-kernel\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999312 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-bpf-maps\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999337 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cni-path\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999370 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d37e919e-df49-47ca-9ad7-f6312f5775fa-clustermesh-secrets\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:27.999898 kubelet[2754]: I0715 23:59:27.999401 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-config-path\") pod \"cilium-gcbhs\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " pod="kube-system/cilium-gcbhs" Jul 15 23:59:28.079413 systemd[1]: Created slice kubepods-besteffort-pod6c28cde6_8735_450d_bedd_88575fc4dba7.slice - 
libcontainer container kubepods-besteffort-pod6c28cde6_8735_450d_bedd_88575fc4dba7.slice. Jul 15 23:59:28.101018 kubelet[2754]: I0715 23:59:28.100460 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7cx\" (UniqueName: \"kubernetes.io/projected/6c28cde6-8735-450d-bedd-88575fc4dba7-kube-api-access-7v7cx\") pod \"cilium-operator-5d85765b45-fj2t7\" (UID: \"6c28cde6-8735-450d-bedd-88575fc4dba7\") " pod="kube-system/cilium-operator-5d85765b45-fj2t7" Jul 15 23:59:28.101504 kubelet[2754]: I0715 23:59:28.101473 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c28cde6-8735-450d-bedd-88575fc4dba7-cilium-config-path\") pod \"cilium-operator-5d85765b45-fj2t7\" (UID: \"6c28cde6-8735-450d-bedd-88575fc4dba7\") " pod="kube-system/cilium-operator-5d85765b45-fj2t7" Jul 15 23:59:28.205556 containerd[1585]: time="2025-07-15T23:59:28.205419549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7wwc4,Uid:f7ce20dd-14fd-465c-80f2-24078a1837c1,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:28.225218 containerd[1585]: time="2025-07-15T23:59:28.224881871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcbhs,Uid:d37e919e-df49-47ca-9ad7-f6312f5775fa,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:28.245894 containerd[1585]: time="2025-07-15T23:59:28.245826622Z" level=info msg="connecting to shim 467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0" address="unix:///run/containerd/s/e86b8735ae800110c5af88b94d9fd0d5b94c48f20d39ca291f11716a1222e128" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:28.262338 containerd[1585]: time="2025-07-15T23:59:28.262278833Z" level=info msg="connecting to shim 6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:28.289644 systemd[1]: Started cri-containerd-467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0.scope - libcontainer container 467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0. Jul 15 23:59:28.313372 systemd[1]: Started cri-containerd-6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee.scope - libcontainer container 6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee. 
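The two sandboxes above (kube-proxy-7wwc4 and cilium-gcbhs) each get their own shim socket under /run/containerd/s/, and containers started later in the same sandbox reconnect to the same socket (the kube-proxy container acb05d95... below reuses .../e86b8735...). A hypothetical helper for correlating these events when reading such a log; the regex is written only for the message format seen here:

```python
import re
from collections import defaultdict

SHIM_RE = re.compile(r'connecting to shim (\S+)" address="unix://(\S+?)"')

def group_by_shim_socket(log_text: str) -> dict[str, list[str]]:
    """Map each shim socket path to the sandbox/container IDs that connected to it."""
    groups: dict[str, list[str]] = defaultdict(list)
    for shim_id, socket_path in SHIM_RE.findall(log_text):
        groups[socket_path].append(shim_id)
    return dict(groups)
```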
Jul 15 23:59:28.361839 containerd[1585]: time="2025-07-15T23:59:28.361753030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7wwc4,Uid:f7ce20dd-14fd-465c-80f2-24078a1837c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0\"" Jul 15 23:59:28.367192 containerd[1585]: time="2025-07-15T23:59:28.367116976Z" level=info msg="CreateContainer within sandbox \"467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:59:28.378335 containerd[1585]: time="2025-07-15T23:59:28.378226960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcbhs,Uid:d37e919e-df49-47ca-9ad7-f6312f5775fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\"" Jul 15 23:59:28.381911 containerd[1585]: time="2025-07-15T23:59:28.381832880Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:59:28.385376 containerd[1585]: time="2025-07-15T23:59:28.384784636Z" level=info msg="Container acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:28.387210 containerd[1585]: time="2025-07-15T23:59:28.387081092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fj2t7,Uid:6c28cde6-8735-450d-bedd-88575fc4dba7,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:28.403649 containerd[1585]: time="2025-07-15T23:59:28.403595788Z" level=info msg="CreateContainer within sandbox \"467dc55d977570d0f1b678976d2e611698463dc9ec902a02a064cfe193b7bcb0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd\"" Jul 15 23:59:28.406550 containerd[1585]: time="2025-07-15T23:59:28.406513885Z" level=info msg="StartContainer for \"acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd\"" Jul 15 23:59:28.413953 containerd[1585]: time="2025-07-15T23:59:28.413885313Z" level=info msg="connecting to shim acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd" address="unix:///run/containerd/s/e86b8735ae800110c5af88b94d9fd0d5b94c48f20d39ca291f11716a1222e128" protocol=ttrpc version=3 Jul 15 23:59:28.426167 containerd[1585]: time="2025-07-15T23:59:28.426084277Z" level=info msg="connecting to shim 2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9" address="unix:///run/containerd/s/496adf20fe1b000563658f22f9c3f13ee81fef75112aa3e4575ce20509fd4231" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:28.461396 systemd[1]: Started cri-containerd-acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd.scope - libcontainer container acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd. Jul 15 23:59:28.479385 systemd[1]: Started cri-containerd-2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9.scope - libcontainer container 2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9. 
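The PullImage request above names the cilium image by both tag and digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce...). Splitting such a reference is straightforward; this is only an illustration of the string format, not containerd's own parser:

```python
ref = ("quay.io/cilium/cilium:v1.12.5"
       "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

name, digest = ref.split("@", 1)   # "quay.io/cilium/cilium:v1.12.5", "sha256:06ce..."
repo, tag = name.rsplit(":", 1)    # "quay.io/cilium/cilium", "v1.12.5"
print(repo, tag, digest)
```

When the pull completes (see the later "Pulled image" message), containerd reports an empty repo tag and only the repo digest, which is consistent with the image having been resolved by digest.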
Jul 15 23:59:28.546553 containerd[1585]: time="2025-07-15T23:59:28.545713586Z" level=info msg="StartContainer for \"acb05d95ba5091a0e8ad3da492950eecba81e7a7b4220ea3ed3ec11167a52edd\" returns successfully" Jul 15 23:59:28.583840 containerd[1585]: time="2025-07-15T23:59:28.583777404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fj2t7,Uid:6c28cde6-8735-450d-bedd-88575fc4dba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\"" Jul 15 23:59:29.455753 kubelet[2754]: I0715 23:59:29.455654 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7wwc4" podStartSLOduration=2.455629482 podStartE2EDuration="2.455629482s" podCreationTimestamp="2025-07-15 23:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:29.455405038 +0000 UTC m=+8.279325569" watchObservedRunningTime="2025-07-15 23:59:29.455629482 +0000 UTC m=+8.279550011" Jul 15 23:59:33.210048 update_engine[1534]: I20250715 23:59:33.209966 1534 update_attempter.cc:509] Updating boot flags... Jul 15 23:59:33.237433 systemd[1]: Started sshd@7-10.128.0.76:22-195.178.110.211:49930.service - OpenSSH per-connection server daemon (195.178.110.211:49930). Jul 15 23:59:33.786844 sshd[3137]: Connection closed by 195.178.110.211 port 49930 Jul 15 23:59:33.789313 systemd[1]: sshd@7-10.128.0.76:22-195.178.110.211:49930.service: Deactivated successfully. Jul 15 23:59:35.129130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441644136.mount: Deactivated successfully. Jul 15 23:59:37.954328 containerd[1585]: time="2025-07-15T23:59:37.954244712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:37.955803 containerd[1585]: time="2025-07-15T23:59:37.955756833Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 23:59:37.957278 containerd[1585]: time="2025-07-15T23:59:37.957213235Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:37.959650 containerd[1585]: time="2025-07-15T23:59:37.959391643Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.577509819s" Jul 15 23:59:37.959650 containerd[1585]: time="2025-07-15T23:59:37.959439456Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 23:59:37.961906 containerd[1585]: time="2025-07-15T23:59:37.961837631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:59:37.965380 containerd[1585]: 
time="2025-07-15T23:59:37.965331736Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:59:37.978075 containerd[1585]: time="2025-07-15T23:59:37.978028823Z" level=info msg="Container dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:37.998889 containerd[1585]: time="2025-07-15T23:59:37.998240478Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\"" Jul 15 23:59:37.999200 containerd[1585]: time="2025-07-15T23:59:37.999164514Z" level=info msg="StartContainer for \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\"" Jul 15 23:59:38.000729 containerd[1585]: time="2025-07-15T23:59:38.000642964Z" level=info msg="connecting to shim dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" protocol=ttrpc version=3 Jul 15 23:59:38.034400 systemd[1]: Started cri-containerd-dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991.scope - libcontainer container dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991. Jul 15 23:59:38.080122 containerd[1585]: time="2025-07-15T23:59:38.080049848Z" level=info msg="StartContainer for \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" returns successfully" Jul 15 23:59:38.097764 systemd[1]: cri-containerd-dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991.scope: Deactivated successfully. Jul 15 23:59:38.103337 containerd[1585]: time="2025-07-15T23:59:38.103292062Z" level=info msg="received exit event container_id:\"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" id:\"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" pid:3191 exited_at:{seconds:1752623978 nanos:102603134}" Jul 15 23:59:38.103604 containerd[1585]: time="2025-07-15T23:59:38.103561052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" id:\"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" pid:3191 exited_at:{seconds:1752623978 nanos:102603134}" Jul 15 23:59:38.136788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991-rootfs.mount: Deactivated successfully. Jul 15 23:59:41.001424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352698879.mount: Deactivated successfully. 
Jul 15 23:59:41.494404 containerd[1585]: time="2025-07-15T23:59:41.494333124Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:59:41.521561 containerd[1585]: time="2025-07-15T23:59:41.521208379Z" level=info msg="Container d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:41.538669 containerd[1585]: time="2025-07-15T23:59:41.538611658Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\"" Jul 15 23:59:41.540249 containerd[1585]: time="2025-07-15T23:59:41.540131168Z" level=info msg="StartContainer for \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\"" Jul 15 23:59:41.542457 containerd[1585]: time="2025-07-15T23:59:41.542398409Z" level=info msg="connecting to shim d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" protocol=ttrpc version=3 Jul 15 23:59:41.599352 systemd[1]: Started cri-containerd-d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3.scope - libcontainer container d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3. Jul 15 23:59:41.684324 containerd[1585]: time="2025-07-15T23:59:41.684122972Z" level=info msg="StartContainer for \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" returns successfully" Jul 15 23:59:41.711269 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:59:41.711959 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:59:41.712943 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:59:41.717042 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:59:41.724858 systemd[1]: cri-containerd-d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3.scope: Deactivated successfully. Jul 15 23:59:41.727945 containerd[1585]: time="2025-07-15T23:59:41.727766941Z" level=info msg="received exit event container_id:\"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" id:\"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" pid:3247 exited_at:{seconds:1752623981 nanos:727158645}" Jul 15 23:59:41.731468 containerd[1585]: time="2025-07-15T23:59:41.731143143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" id:\"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" pid:3247 exited_at:{seconds:1752623981 nanos:727158645}" Jul 15 23:59:41.766381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:59:41.986520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3-rootfs.mount: Deactivated successfully. 
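The TaskExit events above carry the exit time as an epoch seconds/nanos pair (exited_at:{seconds:1752623978 nanos:102603134}). Converting it back to UTC lines up with the surrounding journal timestamps; a one-liner to check:

```python
from datetime import datetime, timezone

exited_at = 1752623978 + 102603134 / 1e9   # from the mount-cgroup TaskExit event
print(datetime.fromtimestamp(exited_at, tz=timezone.utc))
# 2025-07-15 23:59:38.102603+00:00 -- matches the 23:59:38.10 "received exit event" entries
```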
Jul 15 23:59:42.383833 containerd[1585]: time="2025-07-15T23:59:42.383769933Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:42.385329 containerd[1585]: time="2025-07-15T23:59:42.385255202Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 15 23:59:42.387851 containerd[1585]: time="2025-07-15T23:59:42.387765335Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:59:42.389846 containerd[1585]: time="2025-07-15T23:59:42.389648192Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.42773188s" Jul 15 23:59:42.389846 containerd[1585]: time="2025-07-15T23:59:42.389696629Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 23:59:42.394444 containerd[1585]: time="2025-07-15T23:59:42.394142465Z" level=info msg="CreateContainer within sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 23:59:42.409707 containerd[1585]: time="2025-07-15T23:59:42.408768742Z" level=info msg="Container 84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:42.425926 containerd[1585]: time="2025-07-15T23:59:42.425874835Z" level=info msg="CreateContainer within sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\"" Jul 15 23:59:42.427198 containerd[1585]: time="2025-07-15T23:59:42.427157969Z" level=info msg="StartContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\"" Jul 15 23:59:42.429111 containerd[1585]: time="2025-07-15T23:59:42.429060107Z" level=info msg="connecting to shim 84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc" address="unix:///run/containerd/s/496adf20fe1b000563658f22f9c3f13ee81fef75112aa3e4575ce20509fd4231" protocol=ttrpc version=3 Jul 15 23:59:42.459406 systemd[1]: Started cri-containerd-84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc.scope - libcontainer container 84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc. 
Jul 15 23:59:42.507143 containerd[1585]: time="2025-07-15T23:59:42.507074580Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:59:42.519637 containerd[1585]: time="2025-07-15T23:59:42.519589203Z" level=info msg="StartContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" returns successfully" Jul 15 23:59:42.536589 containerd[1585]: time="2025-07-15T23:59:42.533908862Z" level=info msg="Container 7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:42.555115 containerd[1585]: time="2025-07-15T23:59:42.555042645Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\"" Jul 15 23:59:42.557061 containerd[1585]: time="2025-07-15T23:59:42.557026242Z" level=info msg="StartContainer for \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\"" Jul 15 23:59:42.560558 containerd[1585]: time="2025-07-15T23:59:42.560520787Z" level=info msg="connecting to shim 7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" protocol=ttrpc version=3 Jul 15 23:59:42.593601 systemd[1]: Started cri-containerd-7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9.scope - libcontainer container 7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9. Jul 15 23:59:42.682407 containerd[1585]: time="2025-07-15T23:59:42.681728330Z" level=info msg="StartContainer for \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" returns successfully" Jul 15 23:59:42.681997 systemd[1]: cri-containerd-7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9.scope: Deactivated successfully. Jul 15 23:59:42.695339 containerd[1585]: time="2025-07-15T23:59:42.695289404Z" level=info msg="received exit event container_id:\"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" id:\"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" pid:3333 exited_at:{seconds:1752623982 nanos:692499631}" Jul 15 23:59:42.696353 containerd[1585]: time="2025-07-15T23:59:42.696220845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" id:\"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" pid:3333 exited_at:{seconds:1752623982 nanos:692499631}" Jul 15 23:59:43.529226 containerd[1585]: time="2025-07-15T23:59:43.529175042Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:59:43.546128 containerd[1585]: time="2025-07-15T23:59:43.545621781Z" level=info msg="Container e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:43.558195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335587345.mount: Deactivated successfully. 
Jul 15 23:59:43.568432 containerd[1585]: time="2025-07-15T23:59:43.568373192Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\"" Jul 15 23:59:43.571521 containerd[1585]: time="2025-07-15T23:59:43.571485131Z" level=info msg="StartContainer for \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\"" Jul 15 23:59:43.572825 containerd[1585]: time="2025-07-15T23:59:43.572775351Z" level=info msg="connecting to shim e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" protocol=ttrpc version=3 Jul 15 23:59:43.629340 systemd[1]: Started cri-containerd-e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08.scope - libcontainer container e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08. Jul 15 23:59:43.713897 systemd[1]: cri-containerd-e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08.scope: Deactivated successfully. Jul 15 23:59:43.718547 containerd[1585]: time="2025-07-15T23:59:43.718466406Z" level=info msg="received exit event container_id:\"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" id:\"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" pid:3376 exited_at:{seconds:1752623983 nanos:718199485}" Jul 15 23:59:43.719279 containerd[1585]: time="2025-07-15T23:59:43.718746464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" id:\"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" pid:3376 exited_at:{seconds:1752623983 nanos:718199485}" Jul 15 23:59:43.732251 kubelet[2754]: I0715 23:59:43.732170 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fj2t7" podStartSLOduration=1.927851434 podStartE2EDuration="15.732141931s" podCreationTimestamp="2025-07-15 23:59:28 +0000 UTC" firstStartedPulling="2025-07-15 23:59:28.586553666 +0000 UTC m=+7.410474174" lastFinishedPulling="2025-07-15 23:59:42.390844146 +0000 UTC m=+21.214764671" observedRunningTime="2025-07-15 23:59:43.716347372 +0000 UTC m=+22.540267903" watchObservedRunningTime="2025-07-15 23:59:43.732141931 +0000 UTC m=+22.556062463" Jul 15 23:59:43.739538 containerd[1585]: time="2025-07-15T23:59:43.739491518Z" level=info msg="StartContainer for \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" returns successfully" Jul 15 23:59:43.772307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08-rootfs.mount: Deactivated successfully. Jul 15 23:59:44.538143 containerd[1585]: time="2025-07-15T23:59:44.538077219Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:59:44.559180 containerd[1585]: time="2025-07-15T23:59:44.558688638Z" level=info msg="Container d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:44.567861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177390845.mount: Deactivated successfully. 
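Unlike the static control-plane pods, the cilium-operator pod above has real pull timestamps, and its podStartSLOduration (1.927851434s) is roughly the end-to-end duration minus the image pull window, i.e. the SLO figure appears to exclude time spent pulling. Checking with the values from the log (small rounding differences remain):

```python
e2e          = 15.732141931   # podStartE2EDuration, seconds
pull_started = 28.586553666   # firstStartedPulling, seconds past 23:59
pull_done    = 42.390844146   # lastFinishedPulling, seconds past 23:59

print(e2e - (pull_done - pull_started))  # ~1.927851 ~= podStartSLOduration=1.927851434
```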
Jul 15 23:59:44.577601 containerd[1585]: time="2025-07-15T23:59:44.577546799Z" level=info msg="CreateContainer within sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\"" Jul 15 23:59:44.578400 containerd[1585]: time="2025-07-15T23:59:44.578368353Z" level=info msg="StartContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\"" Jul 15 23:59:44.579915 containerd[1585]: time="2025-07-15T23:59:44.579871875Z" level=info msg="connecting to shim d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c" address="unix:///run/containerd/s/755263dc48a6ca0316e30abeec3eb8b62f91feedcbd1a9ae0b3eaccd7357057b" protocol=ttrpc version=3 Jul 15 23:59:44.614491 systemd[1]: Started cri-containerd-d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c.scope - libcontainer container d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c. Jul 15 23:59:44.685556 containerd[1585]: time="2025-07-15T23:59:44.685264934Z" level=info msg="StartContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" returns successfully" Jul 15 23:59:44.837301 containerd[1585]: time="2025-07-15T23:59:44.836834311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" id:\"97c85b2e951c3975716acf5d39970bf7647eb18aeca172f7135ade90dcdf7c21\" pid:3442 exited_at:{seconds:1752623984 nanos:836501042}" Jul 15 23:59:44.908131 kubelet[2754]: I0715 23:59:44.907226 2754 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 23:59:44.962579 systemd[1]: Created slice kubepods-burstable-pod0808a2a6_e960_4dec_8fe6_26bcb02d6492.slice - libcontainer container kubepods-burstable-pod0808a2a6_e960_4dec_8fe6_26bcb02d6492.slice. Jul 15 23:59:44.981241 systemd[1]: Created slice kubepods-burstable-pode42811dc_99cb_4f8d_8834_0425a96840a0.slice - libcontainer container kubepods-burstable-pode42811dc_99cb_4f8d_8834_0425a96840a0.slice. 
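At this point the cilium-gcbhs pod has run its init chain one container at a time, each created in the same 6cac57f8... sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. A small sketch that recovers that order from the ContainerMetadata fields in a log like this one:

```python
import re

CONTAINER_NAME_RE = re.compile(r'&ContainerMetadata\{Name:([^,]+),')

def container_sequence(log_text: str) -> list[str]:
    """Return container names in first-seen order (e.g. mount-cgroup, apply-sysctl-overwrites, ...)."""
    seen: list[str] = []
    for name in CONTAINER_NAME_RE.findall(log_text):
        if name not in seen:
            seen.append(name)
    return seen
```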
Jul 15 23:59:45.045427 kubelet[2754]: I0715 23:59:45.045360 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e42811dc-99cb-4f8d-8834-0425a96840a0-config-volume\") pod \"coredns-7c65d6cfc9-lk647\" (UID: \"e42811dc-99cb-4f8d-8834-0425a96840a0\") " pod="kube-system/coredns-7c65d6cfc9-lk647" Jul 15 23:59:45.045810 kubelet[2754]: I0715 23:59:45.045656 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8pc\" (UniqueName: \"kubernetes.io/projected/e42811dc-99cb-4f8d-8834-0425a96840a0-kube-api-access-wt8pc\") pod \"coredns-7c65d6cfc9-lk647\" (UID: \"e42811dc-99cb-4f8d-8834-0425a96840a0\") " pod="kube-system/coredns-7c65d6cfc9-lk647" Jul 15 23:59:45.045810 kubelet[2754]: I0715 23:59:45.045721 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csljl\" (UniqueName: \"kubernetes.io/projected/0808a2a6-e960-4dec-8fe6-26bcb02d6492-kube-api-access-csljl\") pod \"coredns-7c65d6cfc9-kl2wk\" (UID: \"0808a2a6-e960-4dec-8fe6-26bcb02d6492\") " pod="kube-system/coredns-7c65d6cfc9-kl2wk" Jul 15 23:59:45.045810 kubelet[2754]: I0715 23:59:45.045754 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0808a2a6-e960-4dec-8fe6-26bcb02d6492-config-volume\") pod \"coredns-7c65d6cfc9-kl2wk\" (UID: \"0808a2a6-e960-4dec-8fe6-26bcb02d6492\") " pod="kube-system/coredns-7c65d6cfc9-kl2wk" Jul 15 23:59:45.275881 containerd[1585]: time="2025-07-15T23:59:45.275501410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kl2wk,Uid:0808a2a6-e960-4dec-8fe6-26bcb02d6492,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:45.293810 containerd[1585]: time="2025-07-15T23:59:45.293538663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lk647,Uid:e42811dc-99cb-4f8d-8834-0425a96840a0,Namespace:kube-system,Attempt:0,}" Jul 15 23:59:45.589782 kubelet[2754]: I0715 23:59:45.589581 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gcbhs" podStartSLOduration=9.00895766 podStartE2EDuration="18.589521615s" podCreationTimestamp="2025-07-15 23:59:27 +0000 UTC" firstStartedPulling="2025-07-15 23:59:28.380714926 +0000 UTC m=+7.204635432" lastFinishedPulling="2025-07-15 23:59:37.961278868 +0000 UTC m=+16.785199387" observedRunningTime="2025-07-15 23:59:45.585660965 +0000 UTC m=+24.409581496" watchObservedRunningTime="2025-07-15 23:59:45.589521615 +0000 UTC m=+24.413442146" Jul 15 23:59:47.185170 systemd-networkd[1448]: cilium_host: Link UP Jul 15 23:59:47.188057 systemd-networkd[1448]: cilium_net: Link UP Jul 15 23:59:47.188545 systemd-networkd[1448]: cilium_net: Gained carrier Jul 15 23:59:47.188837 systemd-networkd[1448]: cilium_host: Gained carrier Jul 15 23:59:47.372609 systemd-networkd[1448]: cilium_vxlan: Link UP Jul 15 23:59:47.372622 systemd-networkd[1448]: cilium_vxlan: Gained carrier Jul 15 23:59:47.669143 kernel: NET: Registered PF_ALG protocol family Jul 15 23:59:47.787548 systemd-networkd[1448]: cilium_net: Gained IPv6LL Jul 15 23:59:48.043296 systemd-networkd[1448]: cilium_host: Gained IPv6LL Jul 15 23:59:48.491296 systemd-networkd[1448]: cilium_vxlan: Gained IPv6LL Jul 15 23:59:48.525684 systemd-networkd[1448]: lxc_health: Link UP Jul 15 23:59:48.531190 systemd-networkd[1448]: lxc_health: Gained 
carrier Jul 15 23:59:48.861520 systemd-networkd[1448]: lxc5f6bb7308ed8: Link UP Jul 15 23:59:48.874129 kernel: eth0: renamed from tmp9af77 Jul 15 23:59:48.884185 systemd-networkd[1448]: lxc5f6bb7308ed8: Gained carrier Jul 15 23:59:48.932315 kernel: eth0: renamed from tmp55477 Jul 15 23:59:48.937367 systemd-networkd[1448]: lxcf625ac52b9fe: Link UP Jul 15 23:59:48.942436 systemd-networkd[1448]: lxcf625ac52b9fe: Gained carrier Jul 15 23:59:50.028215 systemd-networkd[1448]: lxc_health: Gained IPv6LL Jul 15 23:59:50.347755 systemd-networkd[1448]: lxcf625ac52b9fe: Gained IPv6LL Jul 15 23:59:50.860161 systemd-networkd[1448]: lxc5f6bb7308ed8: Gained IPv6LL Jul 15 23:59:53.280563 ntpd[1518]: Listen normally on 8 cilium_host 192.168.0.250:123 Jul 15 23:59:53.280695 ntpd[1518]: Listen normally on 9 cilium_net [fe80::3872:4cff:fe1c:4eb0%4]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 8 cilium_host 192.168.0.250:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 9 cilium_net [fe80::3872:4cff:fe1c:4eb0%4]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 10 cilium_host [fe80::24c9:69ff:fe3c:f6f8%5]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 11 cilium_vxlan [fe80::3ce3:13ff:fe20:7c74%6]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 12 lxc_health [fe80::884b:fdff:fee5:5189%8]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 13 lxc5f6bb7308ed8 [fe80::58f5:a6ff:fe9b:2d3d%10]:123 Jul 15 23:59:53.281245 ntpd[1518]: 15 Jul 23:59:53 ntpd[1518]: Listen normally on 14 lxcf625ac52b9fe [fe80::90fc:57ff:fe1e:c682%12]:123 Jul 15 23:59:53.280778 ntpd[1518]: Listen normally on 10 cilium_host [fe80::24c9:69ff:fe3c:f6f8%5]:123 Jul 15 23:59:53.280839 ntpd[1518]: Listen normally on 11 cilium_vxlan [fe80::3ce3:13ff:fe20:7c74%6]:123 Jul 15 23:59:53.280893 ntpd[1518]: Listen normally on 12 lxc_health [fe80::884b:fdff:fee5:5189%8]:123 Jul 15 23:59:53.280949 ntpd[1518]: Listen normally on 13 lxc5f6bb7308ed8 [fe80::58f5:a6ff:fe9b:2d3d%10]:123 Jul 15 23:59:53.281001 ntpd[1518]: Listen normally on 14 lxcf625ac52b9fe [fe80::90fc:57ff:fe1e:c682%12]:123 Jul 15 23:59:54.073274 containerd[1585]: time="2025-07-15T23:59:54.072404510Z" level=info msg="connecting to shim 9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03" address="unix:///run/containerd/s/c24121cffd4184a05e5efebf165e1f3c7c827d4c23ecd5134e414ff80a344c96" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:54.083315 containerd[1585]: time="2025-07-15T23:59:54.083252207Z" level=info msg="connecting to shim 55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f" address="unix:///run/containerd/s/332d762abc5a5419928fc809dab0dabea533b689bb57e3a46e43169640f5d291" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:59:54.156502 systemd[1]: Started cri-containerd-9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03.scope - libcontainer container 9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03. Jul 15 23:59:54.173326 systemd[1]: Started cri-containerd-55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f.scope - libcontainer container 55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f. 
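The addresses ntpd starts listening on above are the IPv6 link-local addresses of the freshly created cilium/lxc interfaces, plus cilium_host at 192.168.0.250, which sits inside the PodCIDR (192.168.0.0/24) that the kubelet applied earlier. Both facts are easy to verify:

```python
import ipaddress

# lxc_health's fe80:: address from the ntpd listen lines is link-local
assert ipaddress.ip_address("fe80::884b:fdff:fee5:5189").is_link_local

# cilium_host (192.168.0.250) falls inside the node PodCIDR applied at 23:59:26
assert ipaddress.ip_address("192.168.0.250") in ipaddress.ip_network("192.168.0.0/24")
```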
Jul 15 23:59:54.282802 containerd[1585]: time="2025-07-15T23:59:54.282701527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kl2wk,Uid:0808a2a6-e960-4dec-8fe6-26bcb02d6492,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03\"" Jul 15 23:59:54.290218 containerd[1585]: time="2025-07-15T23:59:54.289657367Z" level=info msg="CreateContainer within sandbox \"9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:59:54.305163 containerd[1585]: time="2025-07-15T23:59:54.305116243Z" level=info msg="Container cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:54.319929 containerd[1585]: time="2025-07-15T23:59:54.319770953Z" level=info msg="CreateContainer within sandbox \"9af77c2904ff8472e3f58672cf181b785fd47bf489d61da4616c1ef79b95cf03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49\"" Jul 15 23:59:54.320735 containerd[1585]: time="2025-07-15T23:59:54.320706549Z" level=info msg="StartContainer for \"cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49\"" Jul 15 23:59:54.323456 containerd[1585]: time="2025-07-15T23:59:54.323261902Z" level=info msg="connecting to shim cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49" address="unix:///run/containerd/s/c24121cffd4184a05e5efebf165e1f3c7c827d4c23ecd5134e414ff80a344c96" protocol=ttrpc version=3 Jul 15 23:59:54.348590 containerd[1585]: time="2025-07-15T23:59:54.347824161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lk647,Uid:e42811dc-99cb-4f8d-8834-0425a96840a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f\"" Jul 15 23:59:54.355920 containerd[1585]: time="2025-07-15T23:59:54.355874615Z" level=info msg="CreateContainer within sandbox \"55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:59:54.359406 systemd[1]: Started cri-containerd-cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49.scope - libcontainer container cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49. 
Jul 15 23:59:54.372144 containerd[1585]: time="2025-07-15T23:59:54.371761302Z" level=info msg="Container 0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:59:54.382950 containerd[1585]: time="2025-07-15T23:59:54.382900947Z" level=info msg="CreateContainer within sandbox \"55477c4ec41890d06f773645d829567669886c43c87a7acd3dc4009502e9640f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772\"" Jul 15 23:59:54.384705 containerd[1585]: time="2025-07-15T23:59:54.384573400Z" level=info msg="StartContainer for \"0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772\"" Jul 15 23:59:54.387731 containerd[1585]: time="2025-07-15T23:59:54.387690180Z" level=info msg="connecting to shim 0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772" address="unix:///run/containerd/s/332d762abc5a5419928fc809dab0dabea533b689bb57e3a46e43169640f5d291" protocol=ttrpc version=3 Jul 15 23:59:54.422481 systemd[1]: Started cri-containerd-0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772.scope - libcontainer container 0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772. Jul 15 23:59:54.432281 containerd[1585]: time="2025-07-15T23:59:54.432086590Z" level=info msg="StartContainer for \"cf4a904ef457c40c2af1fc4759de6caef6ef3cd1109aa792a68a469c44d49e49\" returns successfully" Jul 15 23:59:54.495612 containerd[1585]: time="2025-07-15T23:59:54.495451848Z" level=info msg="StartContainer for \"0aaa9774b035f6a45b2b0b107ded9df3cc7ff9df1ed0f95c04ac31957f5c9772\" returns successfully" Jul 15 23:59:54.604897 kubelet[2754]: I0715 23:59:54.604663 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lk647" podStartSLOduration=26.604643677 podStartE2EDuration="26.604643677s" podCreationTimestamp="2025-07-15 23:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:54.603184218 +0000 UTC m=+33.427104748" watchObservedRunningTime="2025-07-15 23:59:54.604643677 +0000 UTC m=+33.428564207" Jul 15 23:59:54.631716 kubelet[2754]: I0715 23:59:54.631575 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kl2wk" podStartSLOduration=26.631553377 podStartE2EDuration="26.631553377s" podCreationTimestamp="2025-07-15 23:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:59:54.628748999 +0000 UTC m=+33.452669533" watchObservedRunningTime="2025-07-15 23:59:54.631553377 +0000 UTC m=+33.455473907" Jul 15 23:59:55.029955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504500013.mount: Deactivated successfully. Jul 15 23:59:55.848522 kubelet[2754]: I0715 23:59:55.848140 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:00:14.874422 systemd[1]: Started sshd@8-10.128.0.76:22-139.178.89.65:57364.service - OpenSSH per-connection server daemon (139.178.89.65:57364). 
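The kubelet entries carry both a wall-clock time and a monotonic offset since kubelet start (the m=+... suffix). Subtracting the offset from the wall clock gives the kubelet start time, and independent pairs from this log agree to the microsecond; for example:

```python
from datetime import datetime, timedelta, timezone

def kubelet_start(wall: datetime, monotonic_offset: float) -> datetime:
    return wall - timedelta(seconds=monotonic_offset)

a = kubelet_start(datetime(2025, 7, 15, 23, 59, 54, 604643, tzinfo=timezone.utc), 33.428564207)
b = kubelet_start(datetime(2025, 7, 15, 23, 59, 22, 511442, tzinfo=timezone.utc), 1.335362940)
print(a, b)  # both print 2025-07-15 23:59:21.176079+00:00
```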
Jul 16 00:00:15.188445 sshd[4081]: Accepted publickey for core from 139.178.89.65 port 57364 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:15.191261 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:15.200553 systemd-logind[1527]: New session 8 of user core. Jul 16 00:00:15.205496 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 16 00:00:15.558887 sshd[4083]: Connection closed by 139.178.89.65 port 57364 Jul 16 00:00:15.560022 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:15.567846 systemd[1]: sshd@8-10.128.0.76:22-139.178.89.65:57364.service: Deactivated successfully. Jul 16 00:00:15.571879 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:00:15.574241 systemd-logind[1527]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:00:15.577241 systemd-logind[1527]: Removed session 8. Jul 16 00:00:20.621893 systemd[1]: Started sshd@9-10.128.0.76:22-139.178.89.65:47752.service - OpenSSH per-connection server daemon (139.178.89.65:47752). Jul 16 00:00:20.925563 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 47752 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:20.928060 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:20.936172 systemd-logind[1527]: New session 9 of user core. Jul 16 00:00:20.941328 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 16 00:00:21.220851 sshd[4100]: Connection closed by 139.178.89.65 port 47752 Jul 16 00:00:21.221769 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:21.228124 systemd[1]: sshd@9-10.128.0.76:22-139.178.89.65:47752.service: Deactivated successfully. Jul 16 00:00:21.231631 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 00:00:21.233356 systemd-logind[1527]: Session 9 logged out. Waiting for processes to exit. Jul 16 00:00:21.235809 systemd-logind[1527]: Removed session 9. Jul 16 00:00:26.277539 systemd[1]: Started sshd@10-10.128.0.76:22-139.178.89.65:47756.service - OpenSSH per-connection server daemon (139.178.89.65:47756). Jul 16 00:00:26.592042 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 47756 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:26.593851 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:26.601426 systemd-logind[1527]: New session 10 of user core. Jul 16 00:00:26.607358 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 16 00:00:26.887760 sshd[4117]: Connection closed by 139.178.89.65 port 47756 Jul 16 00:00:26.889057 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:26.895013 systemd[1]: sshd@10-10.128.0.76:22-139.178.89.65:47756.service: Deactivated successfully. Jul 16 00:00:26.898033 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 00:00:26.899663 systemd-logind[1527]: Session 10 logged out. Waiting for processes to exit. Jul 16 00:00:26.901706 systemd-logind[1527]: Removed session 10. Jul 16 00:00:31.947554 systemd[1]: Started sshd@11-10.128.0.76:22-139.178.89.65:47056.service - OpenSSH per-connection server daemon (139.178.89.65:47056). 
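The rest of the log is the same SSH pattern repeated: systemd starts a per-connection service, pam and logind open a numbered session, and everything is torn down when the client disconnects (session 8 above lasts well under a second). If one wanted per-session durations out of a capture like this, pairing the "New session" and "Removed session" lines is enough; a rough sketch, with regexes written only for the exact wording seen here:

```python
import re
from datetime import datetime

NEW     = re.compile(r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: New session (\d+) of user')
REMOVED = re.compile(r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.')

def _ts(s: str) -> datetime:
    # Journal lines omit the year; 2025 is assumed from context.
    return datetime.strptime(f"2025 {s}", "%Y %b %d %H:%M:%S.%f")

def session_durations(log_text: str) -> dict[str, float]:
    opened = {sid: _ts(ts) for ts, sid in NEW.findall(log_text)}
    return {sid: (_ts(ts) - opened[sid]).total_seconds()
            for ts, sid in REMOVED.findall(log_text) if sid in opened}
```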
Jul 16 00:00:32.255432 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 47056 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:32.257596 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:32.266989 systemd-logind[1527]: New session 11 of user core. Jul 16 00:00:32.273353 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 16 00:00:32.548375 sshd[4136]: Connection closed by 139.178.89.65 port 47056 Jul 16 00:00:32.549702 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:32.557184 systemd[1]: sshd@11-10.128.0.76:22-139.178.89.65:47056.service: Deactivated successfully. Jul 16 00:00:32.561450 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 00:00:32.563542 systemd-logind[1527]: Session 11 logged out. Waiting for processes to exit. Jul 16 00:00:32.566162 systemd-logind[1527]: Removed session 11. Jul 16 00:00:37.605291 systemd[1]: Started sshd@12-10.128.0.76:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062). Jul 16 00:00:37.924330 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:37.926517 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:37.934720 systemd-logind[1527]: New session 12 of user core. Jul 16 00:00:37.939400 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 16 00:00:38.230082 sshd[4151]: Connection closed by 139.178.89.65 port 47062 Jul 16 00:00:38.231253 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:38.238615 systemd[1]: sshd@12-10.128.0.76:22-139.178.89.65:47062.service: Deactivated successfully. Jul 16 00:00:38.241956 systemd[1]: session-12.scope: Deactivated successfully. Jul 16 00:00:38.243551 systemd-logind[1527]: Session 12 logged out. Waiting for processes to exit. Jul 16 00:00:38.246703 systemd-logind[1527]: Removed session 12. Jul 16 00:00:38.286863 systemd[1]: Started sshd@13-10.128.0.76:22-139.178.89.65:47072.service - OpenSSH per-connection server daemon (139.178.89.65:47072). Jul 16 00:00:38.607269 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 47072 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:38.609371 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:38.620737 systemd-logind[1527]: New session 13 of user core. Jul 16 00:00:38.627377 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 16 00:00:38.932996 sshd[4166]: Connection closed by 139.178.89.65 port 47072 Jul 16 00:00:38.935566 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:38.944652 systemd[1]: sshd@13-10.128.0.76:22-139.178.89.65:47072.service: Deactivated successfully. Jul 16 00:00:38.947918 systemd[1]: session-13.scope: Deactivated successfully. Jul 16 00:00:38.950078 systemd-logind[1527]: Session 13 logged out. Waiting for processes to exit. Jul 16 00:00:38.952266 systemd-logind[1527]: Removed session 13. Jul 16 00:00:38.994130 systemd[1]: Started sshd@14-10.128.0.76:22-139.178.89.65:50874.service - OpenSSH per-connection server daemon (139.178.89.65:50874). 
Jul 16 00:00:39.304513 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 50874 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:39.306336 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:39.313657 systemd-logind[1527]: New session 14 of user core. Jul 16 00:00:39.319288 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 16 00:00:39.600158 sshd[4177]: Connection closed by 139.178.89.65 port 50874 Jul 16 00:00:39.601028 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:39.606456 systemd[1]: sshd@14-10.128.0.76:22-139.178.89.65:50874.service: Deactivated successfully. Jul 16 00:00:39.610200 systemd[1]: session-14.scope: Deactivated successfully. Jul 16 00:00:39.614216 systemd-logind[1527]: Session 14 logged out. Waiting for processes to exit. Jul 16 00:00:39.619969 systemd-logind[1527]: Removed session 14. Jul 16 00:00:44.653862 systemd[1]: Started sshd@15-10.128.0.76:22-139.178.89.65:50890.service - OpenSSH per-connection server daemon (139.178.89.65:50890). Jul 16 00:00:44.960639 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 50890 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:44.962446 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:44.970158 systemd-logind[1527]: New session 15 of user core. Jul 16 00:00:44.975369 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 16 00:00:45.257697 sshd[4193]: Connection closed by 139.178.89.65 port 50890 Jul 16 00:00:45.259042 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:45.265196 systemd[1]: sshd@15-10.128.0.76:22-139.178.89.65:50890.service: Deactivated successfully. Jul 16 00:00:45.268397 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 00:00:45.270021 systemd-logind[1527]: Session 15 logged out. Waiting for processes to exit. Jul 16 00:00:45.272575 systemd-logind[1527]: Removed session 15. Jul 16 00:00:50.312219 systemd[1]: Started sshd@16-10.128.0.76:22-139.178.89.65:54674.service - OpenSSH per-connection server daemon (139.178.89.65:54674). Jul 16 00:00:50.626217 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 54674 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:50.628289 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:50.635172 systemd-logind[1527]: New session 16 of user core. Jul 16 00:00:50.642340 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 16 00:00:50.922499 sshd[4208]: Connection closed by 139.178.89.65 port 54674 Jul 16 00:00:50.923412 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:50.929557 systemd[1]: sshd@16-10.128.0.76:22-139.178.89.65:54674.service: Deactivated successfully. Jul 16 00:00:50.932717 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 00:00:50.934278 systemd-logind[1527]: Session 16 logged out. Waiting for processes to exit. Jul 16 00:00:50.936737 systemd-logind[1527]: Removed session 16. Jul 16 00:00:50.979048 systemd[1]: Started sshd@17-10.128.0.76:22-139.178.89.65:54678.service - OpenSSH per-connection server daemon (139.178.89.65:54678). 
Jul 16 00:00:51.285257 sshd[4220]: Accepted publickey for core from 139.178.89.65 port 54678 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:51.287302 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:51.294224 systemd-logind[1527]: New session 17 of user core. Jul 16 00:00:51.302312 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 16 00:00:51.646256 sshd[4222]: Connection closed by 139.178.89.65 port 54678 Jul 16 00:00:51.647492 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:51.653525 systemd[1]: sshd@17-10.128.0.76:22-139.178.89.65:54678.service: Deactivated successfully. Jul 16 00:00:51.656387 systemd[1]: session-17.scope: Deactivated successfully. Jul 16 00:00:51.658167 systemd-logind[1527]: Session 17 logged out. Waiting for processes to exit. Jul 16 00:00:51.660586 systemd-logind[1527]: Removed session 17. Jul 16 00:00:51.702634 systemd[1]: Started sshd@18-10.128.0.76:22-139.178.89.65:54686.service - OpenSSH per-connection server daemon (139.178.89.65:54686). Jul 16 00:00:52.007760 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 54686 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:52.009545 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:52.017174 systemd-logind[1527]: New session 18 of user core. Jul 16 00:00:52.023333 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 16 00:00:53.774277 sshd[4234]: Connection closed by 139.178.89.65 port 54686 Jul 16 00:00:53.774841 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:53.783700 systemd[1]: sshd@18-10.128.0.76:22-139.178.89.65:54686.service: Deactivated successfully. Jul 16 00:00:53.788076 systemd[1]: session-18.scope: Deactivated successfully. Jul 16 00:00:53.791362 systemd-logind[1527]: Session 18 logged out. Waiting for processes to exit. Jul 16 00:00:53.794894 systemd-logind[1527]: Removed session 18. Jul 16 00:00:53.828243 systemd[1]: Started sshd@19-10.128.0.76:22-139.178.89.65:54694.service - OpenSSH per-connection server daemon (139.178.89.65:54694). Jul 16 00:00:54.134044 sshd[4251]: Accepted publickey for core from 139.178.89.65 port 54694 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:54.136352 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:54.145182 systemd-logind[1527]: New session 19 of user core. Jul 16 00:00:54.151363 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 16 00:00:54.549117 sshd[4253]: Connection closed by 139.178.89.65 port 54694 Jul 16 00:00:54.550050 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:54.556455 systemd[1]: sshd@19-10.128.0.76:22-139.178.89.65:54694.service: Deactivated successfully. Jul 16 00:00:54.559920 systemd[1]: session-19.scope: Deactivated successfully. Jul 16 00:00:54.561455 systemd-logind[1527]: Session 19 logged out. Waiting for processes to exit. Jul 16 00:00:54.563813 systemd-logind[1527]: Removed session 19. Jul 16 00:00:54.609753 systemd[1]: Started sshd@20-10.128.0.76:22-139.178.89.65:54710.service - OpenSSH per-connection server daemon (139.178.89.65:54710). 
Jul 16 00:00:54.917846 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 54710 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:00:54.920037 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:00:54.926176 systemd-logind[1527]: New session 20 of user core. Jul 16 00:00:54.937319 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 16 00:00:55.224832 sshd[4264]: Connection closed by 139.178.89.65 port 54710 Jul 16 00:00:55.226153 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jul 16 00:00:55.233606 systemd[1]: sshd@20-10.128.0.76:22-139.178.89.65:54710.service: Deactivated successfully. Jul 16 00:00:55.237461 systemd[1]: session-20.scope: Deactivated successfully. Jul 16 00:00:55.239385 systemd-logind[1527]: Session 20 logged out. Waiting for processes to exit. Jul 16 00:00:55.243878 systemd-logind[1527]: Removed session 20. Jul 16 00:01:00.282707 systemd[1]: Started sshd@21-10.128.0.76:22-139.178.89.65:35254.service - OpenSSH per-connection server daemon (139.178.89.65:35254). Jul 16 00:01:00.585176 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 35254 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:00.587609 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:00.596185 systemd-logind[1527]: New session 21 of user core. Jul 16 00:01:00.602324 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 16 00:01:00.877073 sshd[4283]: Connection closed by 139.178.89.65 port 35254 Jul 16 00:01:00.878382 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:00.884440 systemd[1]: sshd@21-10.128.0.76:22-139.178.89.65:35254.service: Deactivated successfully. Jul 16 00:01:00.887605 systemd[1]: session-21.scope: Deactivated successfully. Jul 16 00:01:00.889078 systemd-logind[1527]: Session 21 logged out. Waiting for processes to exit. Jul 16 00:01:00.891437 systemd-logind[1527]: Removed session 21. Jul 16 00:01:05.934600 systemd[1]: Started sshd@22-10.128.0.76:22-139.178.89.65:35262.service - OpenSSH per-connection server daemon (139.178.89.65:35262). Jul 16 00:01:06.249139 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 35262 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:06.251418 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:06.259207 systemd-logind[1527]: New session 22 of user core. Jul 16 00:01:06.265453 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 16 00:01:06.555264 sshd[4297]: Connection closed by 139.178.89.65 port 35262 Jul 16 00:01:06.555454 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:06.563705 systemd[1]: sshd@22-10.128.0.76:22-139.178.89.65:35262.service: Deactivated successfully. Jul 16 00:01:06.568025 systemd[1]: session-22.scope: Deactivated successfully. Jul 16 00:01:06.570200 systemd-logind[1527]: Session 22 logged out. Waiting for processes to exit. Jul 16 00:01:06.572573 systemd-logind[1527]: Removed session 22. 
Jul 16 00:01:10.210257 update_engine[1534]: I20250716 00:01:10.210167 1534 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 16 00:01:10.210257 update_engine[1534]: I20250716 00:01:10.210238 1534 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 16 00:01:10.210967 update_engine[1534]: I20250716 00:01:10.210531 1534 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 16 00:01:10.211794 update_engine[1534]: I20250716 00:01:10.211740 1534 omaha_request_params.cc:62] Current group set to alpha
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.211906 1534 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.211953 1534 update_attempter.cc:643] Scheduling an action processor start.
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.211983 1534 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.212033 1534 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.212162 1534 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.212179 1534 omaha_request_action.cc:272] Request:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]:
Jul 16 00:01:10.212437 update_engine[1534]: I20250716 00:01:10.212190 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 16 00:01:10.213177 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 16 00:01:10.214152 update_engine[1534]: I20250716 00:01:10.214077 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 16 00:01:10.214598 update_engine[1534]: I20250716 00:01:10.214541 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 16 00:01:10.669451 update_engine[1534]: E20250716 00:01:10.669215 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 16 00:01:10.669451 update_engine[1534]: I20250716 00:01:10.669396 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 16 00:01:11.615304 systemd[1]: Started sshd@23-10.128.0.76:22-139.178.89.65:38542.service - OpenSSH per-connection server daemon (139.178.89.65:38542).
Jul 16 00:01:11.929194 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 38542 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 16 00:01:11.931265 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:01:11.939202 systemd-logind[1527]: New session 23 of user core.
Jul 16 00:01:11.946399 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:01:12.235417 sshd[4311]: Connection closed by 139.178.89.65 port 38542
Jul 16 00:01:12.236742 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Jul 16 00:01:12.244785 systemd[1]: sshd@23-10.128.0.76:22-139.178.89.65:38542.service: Deactivated successfully.
Jul 16 00:01:12.248491 systemd[1]: session-23.scope: Deactivated successfully. Jul 16 00:01:12.250362 systemd-logind[1527]: Session 23 logged out. Waiting for processes to exit. Jul 16 00:01:12.253956 systemd-logind[1527]: Removed session 23. Jul 16 00:01:12.289562 systemd[1]: Started sshd@24-10.128.0.76:22-139.178.89.65:38544.service - OpenSSH per-connection server daemon (139.178.89.65:38544). Jul 16 00:01:12.599681 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 38544 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:12.601664 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:12.608301 systemd-logind[1527]: New session 24 of user core. Jul 16 00:01:12.613343 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 16 00:01:14.866453 containerd[1585]: time="2025-07-16T00:01:14.866181120Z" level=info msg="StopContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" with timeout 30 (s)" Jul 16 00:01:14.868923 containerd[1585]: time="2025-07-16T00:01:14.868869908Z" level=info msg="Stop container \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" with signal terminated" Jul 16 00:01:14.897678 systemd[1]: cri-containerd-84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc.scope: Deactivated successfully. Jul 16 00:01:14.903038 containerd[1585]: time="2025-07-16T00:01:14.902972928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" id:\"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" pid:3301 exited_at:{seconds:1752624074 nanos:902310720}" Jul 16 00:01:14.903618 containerd[1585]: time="2025-07-16T00:01:14.903572842Z" level=info msg="received exit event container_id:\"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" id:\"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" pid:3301 exited_at:{seconds:1752624074 nanos:902310720}" Jul 16 00:01:14.904344 containerd[1585]: time="2025-07-16T00:01:14.904297534Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:01:14.914535 containerd[1585]: time="2025-07-16T00:01:14.914453980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" id:\"34ad19adbe749518a858c52d437e1e11a83c8db4e83ecb67397f613f268e8d64\" pid:4351 exited_at:{seconds:1752624074 nanos:913331693}" Jul 16 00:01:14.920942 containerd[1585]: time="2025-07-16T00:01:14.920613753Z" level=info msg="StopContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" with timeout 2 (s)" Jul 16 00:01:14.921556 containerd[1585]: time="2025-07-16T00:01:14.921523404Z" level=info msg="Stop container \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" with signal terminated" Jul 16 00:01:14.940492 systemd-networkd[1448]: lxc_health: Link DOWN Jul 16 00:01:14.940507 systemd-networkd[1448]: lxc_health: Lost carrier Jul 16 00:01:14.971395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc-rootfs.mount: Deactivated successfully. 
Jul 16 00:01:14.974076 systemd[1]: cri-containerd-d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c.scope: Deactivated successfully. Jul 16 00:01:14.976073 systemd[1]: cri-containerd-d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c.scope: Consumed 9.683s CPU time, 127.8M memory peak, 136K read from disk, 13.3M written to disk. Jul 16 00:01:14.982989 containerd[1585]: time="2025-07-16T00:01:14.982839472Z" level=info msg="received exit event container_id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" pid:3411 exited_at:{seconds:1752624074 nanos:980624700}" Jul 16 00:01:14.984123 containerd[1585]: time="2025-07-16T00:01:14.983726157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" id:\"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" pid:3411 exited_at:{seconds:1752624074 nanos:980624700}" Jul 16 00:01:15.007479 containerd[1585]: time="2025-07-16T00:01:15.007416881Z" level=info msg="StopContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" returns successfully" Jul 16 00:01:15.010377 containerd[1585]: time="2025-07-16T00:01:15.010308397Z" level=info msg="StopPodSandbox for \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\"" Jul 16 00:01:15.010846 containerd[1585]: time="2025-07-16T00:01:15.010723492Z" level=info msg="Container to stop \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.029995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c-rootfs.mount: Deactivated successfully. Jul 16 00:01:15.036675 systemd[1]: cri-containerd-2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9.scope: Deactivated successfully. 
Jul 16 00:01:15.044912 containerd[1585]: time="2025-07-16T00:01:15.044685641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" id:\"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" pid:2975 exit_status:137 exited_at:{seconds:1752624075 nanos:44044830}" Jul 16 00:01:15.049120 containerd[1585]: time="2025-07-16T00:01:15.048950053Z" level=info msg="StopContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" returns successfully" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050495459Z" level=info msg="StopPodSandbox for \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\"" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050602803Z" level=info msg="Container to stop \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050625776Z" level=info msg="Container to stop \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050647722Z" level=info msg="Container to stop \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050663342Z" level=info msg="Container to stop \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.051014 containerd[1585]: time="2025-07-16T00:01:15.050696760Z" level=info msg="Container to stop \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 00:01:15.065390 systemd[1]: cri-containerd-6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee.scope: Deactivated successfully. Jul 16 00:01:15.116327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9-rootfs.mount: Deactivated successfully. Jul 16 00:01:15.121173 containerd[1585]: time="2025-07-16T00:01:15.121125030Z" level=info msg="shim disconnected" id=2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9 namespace=k8s.io Jul 16 00:01:15.121610 containerd[1585]: time="2025-07-16T00:01:15.121569991Z" level=warning msg="cleaning up after shim disconnected" id=2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9 namespace=k8s.io Jul 16 00:01:15.121895 containerd[1585]: time="2025-07-16T00:01:15.121824344Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 16 00:01:15.132763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee-rootfs.mount: Deactivated successfully. 
Jul 16 00:01:15.135800 containerd[1585]: time="2025-07-16T00:01:15.135728066Z" level=info msg="shim disconnected" id=6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee namespace=k8s.io Jul 16 00:01:15.136408 containerd[1585]: time="2025-07-16T00:01:15.136069388Z" level=warning msg="cleaning up after shim disconnected" id=6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee namespace=k8s.io Jul 16 00:01:15.136408 containerd[1585]: time="2025-07-16T00:01:15.136118279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 16 00:01:15.160228 containerd[1585]: time="2025-07-16T00:01:15.160156964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" id:\"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" pid:2902 exit_status:137 exited_at:{seconds:1752624075 nanos:74522385}" Jul 16 00:01:15.164566 containerd[1585]: time="2025-07-16T00:01:15.160821156Z" level=info msg="received exit event sandbox_id:\"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" exit_status:137 exited_at:{seconds:1752624075 nanos:44044830}" Jul 16 00:01:15.164846 containerd[1585]: time="2025-07-16T00:01:15.164808542Z" level=info msg="received exit event sandbox_id:\"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" exit_status:137 exited_at:{seconds:1752624075 nanos:74522385}" Jul 16 00:01:15.167394 containerd[1585]: time="2025-07-16T00:01:15.162600399Z" level=info msg="TearDown network for sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" successfully" Jul 16 00:01:15.167394 containerd[1585]: time="2025-07-16T00:01:15.166501159Z" level=info msg="StopPodSandbox for \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" returns successfully" Jul 16 00:01:15.167394 containerd[1585]: time="2025-07-16T00:01:15.166756632Z" level=info msg="TearDown network for sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" successfully" Jul 16 00:01:15.167394 containerd[1585]: time="2025-07-16T00:01:15.166777946Z" level=info msg="StopPodSandbox for \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" returns successfully" Jul 16 00:01:15.166707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9-shm.mount: Deactivated successfully. 
Jul 16 00:01:15.277931 kubelet[2754]: I0716 00:01:15.277825 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnmts\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-kube-api-access-hnmts\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.278826 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-net\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.278882 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-hubble-tls\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.278912 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-run\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.278951 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-hostproc\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.278977 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-xtables-lock\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.279833 kubelet[2754]: I0716 00:01:15.279011 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-etc-cni-netd\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 00:01:15.279039 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cni-path\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 00:01:15.279084 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d37e919e-df49-47ca-9ad7-f6312f5775fa-clustermesh-secrets\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 00:01:15.279143 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c28cde6-8735-450d-bedd-88575fc4dba7-cilium-config-path\") pod \"6c28cde6-8735-450d-bedd-88575fc4dba7\" (UID: \"6c28cde6-8735-450d-bedd-88575fc4dba7\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 
00:01:15.279177 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-config-path\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 00:01:15.279205 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-lib-modules\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280271 kubelet[2754]: I0716 00:01:15.279234 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-cgroup\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280561 kubelet[2754]: I0716 00:01:15.279263 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-kernel\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280561 kubelet[2754]: I0716 00:01:15.279294 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-bpf-maps\") pod \"d37e919e-df49-47ca-9ad7-f6312f5775fa\" (UID: \"d37e919e-df49-47ca-9ad7-f6312f5775fa\") " Jul 16 00:01:15.280561 kubelet[2754]: I0716 00:01:15.279331 2754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v7cx\" (UniqueName: \"kubernetes.io/projected/6c28cde6-8735-450d-bedd-88575fc4dba7-kube-api-access-7v7cx\") pod \"6c28cde6-8735-450d-bedd-88575fc4dba7\" (UID: \"6c28cde6-8735-450d-bedd-88575fc4dba7\") " Jul 16 00:01:15.280561 kubelet[2754]: I0716 00:01:15.280375 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.280561 kubelet[2754]: I0716 00:01:15.280464 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.281559 kubelet[2754]: I0716 00:01:15.281505 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.281668 kubelet[2754]: I0716 00:01:15.281578 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.281668 kubelet[2754]: I0716 00:01:15.281605 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.281668 kubelet[2754]: I0716 00:01:15.281633 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.281818 kubelet[2754]: I0716 00:01:15.281678 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.282507 kubelet[2754]: I0716 00:01:15.282449 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.282893 kubelet[2754]: I0716 00:01:15.282758 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.282893 kubelet[2754]: I0716 00:01:15.282847 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 00:01:15.293553 kubelet[2754]: I0716 00:01:15.293305 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-kube-api-access-hnmts" (OuterVolumeSpecName: "kube-api-access-hnmts") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "kube-api-access-hnmts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:01:15.295188 kubelet[2754]: I0716 00:01:15.295133 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c28cde6-8735-450d-bedd-88575fc4dba7-kube-api-access-7v7cx" (OuterVolumeSpecName: "kube-api-access-7v7cx") pod "6c28cde6-8735-450d-bedd-88575fc4dba7" (UID: "6c28cde6-8735-450d-bedd-88575fc4dba7"). InnerVolumeSpecName "kube-api-access-7v7cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:01:15.295486 kubelet[2754]: I0716 00:01:15.295456 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:01:15.296781 kubelet[2754]: I0716 00:01:15.296730 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c28cde6-8735-450d-bedd-88575fc4dba7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c28cde6-8735-450d-bedd-88575fc4dba7" (UID: "6c28cde6-8735-450d-bedd-88575fc4dba7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 00:01:15.297405 kubelet[2754]: I0716 00:01:15.297376 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d37e919e-df49-47ca-9ad7-f6312f5775fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 00:01:15.297500 kubelet[2754]: I0716 00:01:15.297351 2754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d37e919e-df49-47ca-9ad7-f6312f5775fa" (UID: "d37e919e-df49-47ca-9ad7-f6312f5775fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 00:01:15.369204 systemd[1]: Removed slice kubepods-besteffort-pod6c28cde6_8735_450d_bedd_88575fc4dba7.slice - libcontainer container kubepods-besteffort-pod6c28cde6_8735_450d_bedd_88575fc4dba7.slice. Jul 16 00:01:15.373561 systemd[1]: Removed slice kubepods-burstable-podd37e919e_df49_47ca_9ad7_f6312f5775fa.slice - libcontainer container kubepods-burstable-podd37e919e_df49_47ca_9ad7_f6312f5775fa.slice. Jul 16 00:01:15.373771 systemd[1]: kubepods-burstable-podd37e919e_df49_47ca_9ad7_f6312f5775fa.slice: Consumed 9.835s CPU time, 128.3M memory peak, 136K read from disk, 13.3M written to disk. 
Jul 16 00:01:15.379877 kubelet[2754]: I0716 00:01:15.379825 2754 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d37e919e-df49-47ca-9ad7-f6312f5775fa-clustermesh-secrets\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380319 kubelet[2754]: I0716 00:01:15.379995 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c28cde6-8735-450d-bedd-88575fc4dba7-cilium-config-path\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380319 kubelet[2754]: I0716 00:01:15.380147 2754 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-hostproc\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380319 kubelet[2754]: I0716 00:01:15.380174 2754 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-xtables-lock\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380807 kubelet[2754]: I0716 00:01:15.380193 2754 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-etc-cni-netd\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380807 kubelet[2754]: I0716 00:01:15.380422 2754 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cni-path\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380807 kubelet[2754]: I0716 00:01:15.380440 2754 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-lib-modules\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.380807 kubelet[2754]: I0716 00:01:15.380727 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-config-path\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.381521 kubelet[2754]: I0716 00:01:15.380877 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-cgroup\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.381521 kubelet[2754]: I0716 00:01:15.380903 2754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-kernel\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.381521 kubelet[2754]: I0716 00:01:15.380920 2754 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-bpf-maps\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.381521 kubelet[2754]: I0716 00:01:15.381433 2754 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7v7cx\" (UniqueName: \"kubernetes.io/projected/6c28cde6-8735-450d-bedd-88575fc4dba7-kube-api-access-7v7cx\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.382219 kubelet[2754]: I0716 00:01:15.381687 2754 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-hubble-tls\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.382219 kubelet[2754]: I0716 00:01:15.381712 2754 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-cilium-run\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.382219 kubelet[2754]: I0716 00:01:15.381951 2754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnmts\" (UniqueName: \"kubernetes.io/projected/d37e919e-df49-47ca-9ad7-f6312f5775fa-kube-api-access-hnmts\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.382526 kubelet[2754]: I0716 00:01:15.381980 2754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d37e919e-df49-47ca-9ad7-f6312f5775fa-host-proc-sys-net\") on node \"ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f\" DevicePath \"\"" Jul 16 00:01:15.828262 kubelet[2754]: I0716 00:01:15.825703 2754 scope.go:117] "RemoveContainer" containerID="d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c" Jul 16 00:01:15.833690 containerd[1585]: time="2025-07-16T00:01:15.833621780Z" level=info msg="RemoveContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\"" Jul 16 00:01:15.846539 containerd[1585]: time="2025-07-16T00:01:15.846471349Z" level=info msg="RemoveContainer for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" returns successfully" Jul 16 00:01:15.847353 kubelet[2754]: I0716 00:01:15.847216 2754 scope.go:117] "RemoveContainer" containerID="e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08" Jul 16 00:01:15.850432 containerd[1585]: time="2025-07-16T00:01:15.850396029Z" level=info msg="RemoveContainer for \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\"" Jul 16 00:01:15.864494 containerd[1585]: time="2025-07-16T00:01:15.864025419Z" level=info msg="RemoveContainer for \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" returns successfully" Jul 16 00:01:15.865667 kubelet[2754]: I0716 00:01:15.865616 2754 scope.go:117] "RemoveContainer" containerID="7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9" Jul 16 00:01:15.878835 containerd[1585]: time="2025-07-16T00:01:15.877271138Z" level=info msg="RemoveContainer for \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\"" Jul 16 00:01:15.890163 containerd[1585]: time="2025-07-16T00:01:15.890065147Z" level=info msg="RemoveContainer for \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" returns successfully" Jul 16 00:01:15.890531 kubelet[2754]: I0716 00:01:15.890497 2754 scope.go:117] "RemoveContainer" containerID="d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3" Jul 16 00:01:15.892654 containerd[1585]: time="2025-07-16T00:01:15.892615350Z" level=info msg="RemoveContainer for 
\"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\"" Jul 16 00:01:15.897615 containerd[1585]: time="2025-07-16T00:01:15.897576307Z" level=info msg="RemoveContainer for \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" returns successfully" Jul 16 00:01:15.897915 kubelet[2754]: I0716 00:01:15.897891 2754 scope.go:117] "RemoveContainer" containerID="dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991" Jul 16 00:01:15.899875 containerd[1585]: time="2025-07-16T00:01:15.899786182Z" level=info msg="RemoveContainer for \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\"" Jul 16 00:01:15.903718 containerd[1585]: time="2025-07-16T00:01:15.903683512Z" level=info msg="RemoveContainer for \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" returns successfully" Jul 16 00:01:15.904006 kubelet[2754]: I0716 00:01:15.903967 2754 scope.go:117] "RemoveContainer" containerID="d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c" Jul 16 00:01:15.904323 containerd[1585]: time="2025-07-16T00:01:15.904278237Z" level=error msg="ContainerStatus for \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\": not found" Jul 16 00:01:15.904614 kubelet[2754]: E0716 00:01:15.904493 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\": not found" containerID="d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c" Jul 16 00:01:15.904718 kubelet[2754]: I0716 00:01:15.904537 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c"} err="failed to get container status \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d98088d5dc5064b9d05a9b6755b647d6d31bc880271cb5cffb02f7374b56c65c\": not found" Jul 16 00:01:15.904718 kubelet[2754]: I0716 00:01:15.904640 2754 scope.go:117] "RemoveContainer" containerID="e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08" Jul 16 00:01:15.904980 containerd[1585]: time="2025-07-16T00:01:15.904863816Z" level=error msg="ContainerStatus for \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\": not found" Jul 16 00:01:15.905108 kubelet[2754]: E0716 00:01:15.905015 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\": not found" containerID="e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08" Jul 16 00:01:15.905108 kubelet[2754]: I0716 00:01:15.905051 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08"} err="failed to get container status \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"e209299e866602692fb03c7343ab5f4031a57c7a893775d5f70f460f0d631a08\": not found" Jul 16 00:01:15.905259 kubelet[2754]: I0716 00:01:15.905080 2754 scope.go:117] "RemoveContainer" containerID="7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9" Jul 16 00:01:15.905506 containerd[1585]: time="2025-07-16T00:01:15.905445121Z" level=error msg="ContainerStatus for \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\": not found" Jul 16 00:01:15.905778 kubelet[2754]: E0716 00:01:15.905673 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\": not found" containerID="7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9" Jul 16 00:01:15.905778 kubelet[2754]: I0716 00:01:15.905725 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9"} err="failed to get container status \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7415c00db0bb66ad0c6f427551c1fa6b529bcb69b1ac05501579ec18ec8669d9\": not found" Jul 16 00:01:15.905778 kubelet[2754]: I0716 00:01:15.905749 2754 scope.go:117] "RemoveContainer" containerID="d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3" Jul 16 00:01:15.906007 containerd[1585]: time="2025-07-16T00:01:15.905966192Z" level=error msg="ContainerStatus for \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\": not found" Jul 16 00:01:15.906212 kubelet[2754]: E0716 00:01:15.906177 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\": not found" containerID="d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3" Jul 16 00:01:15.906294 kubelet[2754]: I0716 00:01:15.906257 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3"} err="failed to get container status \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6bdb0ee5247a49772539aa53ed9894f8db19ffca1eaa9ccc82b02a5707c44d3\": not found" Jul 16 00:01:15.906294 kubelet[2754]: I0716 00:01:15.906285 2754 scope.go:117] "RemoveContainer" containerID="dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991" Jul 16 00:01:15.906741 containerd[1585]: time="2025-07-16T00:01:15.906632225Z" level=error msg="ContainerStatus for \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\": not found" Jul 16 00:01:15.907019 kubelet[2754]: E0716 00:01:15.906980 2754 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\": not found" containerID="dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991" Jul 16 00:01:15.907142 kubelet[2754]: I0716 00:01:15.907033 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991"} err="failed to get container status \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc51851c48b9b05aceb1f15c6c6d9bdca0336374913261fa51a6d37d4a67e991\": not found" Jul 16 00:01:15.907142 kubelet[2754]: I0716 00:01:15.907058 2754 scope.go:117] "RemoveContainer" containerID="84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc" Jul 16 00:01:15.909169 containerd[1585]: time="2025-07-16T00:01:15.909136590Z" level=info msg="RemoveContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\"" Jul 16 00:01:15.913711 containerd[1585]: time="2025-07-16T00:01:15.913681574Z" level=info msg="RemoveContainer for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" returns successfully" Jul 16 00:01:15.913912 kubelet[2754]: I0716 00:01:15.913889 2754 scope.go:117] "RemoveContainer" containerID="84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc" Jul 16 00:01:15.914327 containerd[1585]: time="2025-07-16T00:01:15.914285062Z" level=error msg="ContainerStatus for \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\": not found" Jul 16 00:01:15.914546 kubelet[2754]: E0716 00:01:15.914455 2754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\": not found" containerID="84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc" Jul 16 00:01:15.914546 kubelet[2754]: I0716 00:01:15.914498 2754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc"} err="failed to get container status \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"84247a90f4b1e02b9c3173a52d2bac76714f01be83e9184d6ee73cafa9070dbc\": not found" Jul 16 00:01:15.967995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee-shm.mount: Deactivated successfully. Jul 16 00:01:15.968617 systemd[1]: var-lib-kubelet-pods-6c28cde6\x2d8735\x2d450d\x2dbedd\x2d88575fc4dba7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7v7cx.mount: Deactivated successfully. Jul 16 00:01:15.968779 systemd[1]: var-lib-kubelet-pods-d37e919e\x2ddf49\x2d47ca\x2d9ad7\x2df6312f5775fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnmts.mount: Deactivated successfully. Jul 16 00:01:15.968896 systemd[1]: var-lib-kubelet-pods-d37e919e\x2ddf49\x2d47ca\x2d9ad7\x2df6312f5775fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 16 00:01:15.969427 systemd[1]: var-lib-kubelet-pods-d37e919e\x2ddf49\x2d47ca\x2d9ad7\x2df6312f5775fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 16 00:01:16.520782 kubelet[2754]: E0716 00:01:16.520699 2754 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 16 00:01:16.819024 sshd[4325]: Connection closed by 139.178.89.65 port 38544 Jul 16 00:01:16.818662 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:16.832376 systemd[1]: sshd@24-10.128.0.76:22-139.178.89.65:38544.service: Deactivated successfully. Jul 16 00:01:16.835774 systemd[1]: session-24.scope: Deactivated successfully. Jul 16 00:01:16.836157 systemd[1]: session-24.scope: Consumed 1.473s CPU time, 26M memory peak. Jul 16 00:01:16.837544 systemd-logind[1527]: Session 24 logged out. Waiting for processes to exit. Jul 16 00:01:16.842524 systemd-logind[1527]: Removed session 24. Jul 16 00:01:16.877433 systemd[1]: Started sshd@25-10.128.0.76:22-139.178.89.65:38548.service - OpenSSH per-connection server daemon (139.178.89.65:38548). Jul 16 00:01:17.198366 sshd[4479]: Accepted publickey for core from 139.178.89.65 port 38548 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:17.200343 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:17.209203 systemd-logind[1527]: New session 25 of user core. Jul 16 00:01:17.215454 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 16 00:01:17.280543 ntpd[1518]: Deleting interface #12 lxc_health, fe80::884b:fdff:fee5:5189%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Jul 16 00:01:17.281220 ntpd[1518]: 16 Jul 00:01:17 ntpd[1518]: Deleting interface #12 lxc_health, fe80::884b:fdff:fee5:5189%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Jul 16 00:01:17.361429 kubelet[2754]: I0716 00:01:17.361318 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c28cde6-8735-450d-bedd-88575fc4dba7" path="/var/lib/kubelet/pods/6c28cde6-8735-450d-bedd-88575fc4dba7/volumes" Jul 16 00:01:17.362190 kubelet[2754]: I0716 00:01:17.362142 2754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" path="/var/lib/kubelet/pods/d37e919e-df49-47ca-9ad7-f6312f5775fa/volumes" Jul 16 00:01:18.301308 sshd[4481]: Connection closed by 139.178.89.65 port 38548 Jul 16 00:01:18.304277 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:18.310987 kubelet[2754]: E0716 00:01:18.310942 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="mount-cgroup" Jul 16 00:01:18.310987 kubelet[2754]: E0716 00:01:18.310985 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="clean-cilium-state" Jul 16 00:01:18.311667 kubelet[2754]: E0716 00:01:18.310998 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="cilium-agent" Jul 16 00:01:18.311667 kubelet[2754]: E0716 00:01:18.311012 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="apply-sysctl-overwrites" Jul 16 00:01:18.311667 kubelet[2754]: E0716 
00:01:18.311023 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c28cde6-8735-450d-bedd-88575fc4dba7" containerName="cilium-operator" Jul 16 00:01:18.311667 kubelet[2754]: E0716 00:01:18.311037 2754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="mount-bpf-fs" Jul 16 00:01:18.313011 kubelet[2754]: I0716 00:01:18.311081 2754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28cde6-8735-450d-bedd-88575fc4dba7" containerName="cilium-operator" Jul 16 00:01:18.313011 kubelet[2754]: I0716 00:01:18.312149 2754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37e919e-df49-47ca-9ad7-f6312f5775fa" containerName="cilium-agent" Jul 16 00:01:18.320508 systemd[1]: sshd@25-10.128.0.76:22-139.178.89.65:38548.service: Deactivated successfully. Jul 16 00:01:18.324942 systemd[1]: session-25.scope: Deactivated successfully. Jul 16 00:01:18.330358 systemd-logind[1527]: Session 25 logged out. Waiting for processes to exit. Jul 16 00:01:18.340498 systemd-logind[1527]: Removed session 25. Jul 16 00:01:18.372188 systemd[1]: Created slice kubepods-burstable-pod8baa40ff_b624_4d94_b966_89fa04f3536a.slice - libcontainer container kubepods-burstable-pod8baa40ff_b624_4d94_b966_89fa04f3536a.slice. Jul 16 00:01:18.375605 systemd[1]: Started sshd@26-10.128.0.76:22-139.178.89.65:38562.service - OpenSSH per-connection server daemon (139.178.89.65:38562). Jul 16 00:01:18.403882 kubelet[2754]: I0716 00:01:18.403746 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8baa40ff-b624-4d94-b966-89fa04f3536a-cilium-config-path\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.403882 kubelet[2754]: I0716 00:01:18.403807 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-host-proc-sys-net\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.403882 kubelet[2754]: I0716 00:01:18.403852 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-host-proc-sys-kernel\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.403882 kubelet[2754]: I0716 00:01:18.403881 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8baa40ff-b624-4d94-b966-89fa04f3536a-hubble-tls\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.403910 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-etc-cni-netd\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.403937 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-cni-path\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.403964 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-lib-modules\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.403995 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8baa40ff-b624-4d94-b966-89fa04f3536a-clustermesh-secrets\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.404022 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggc9q\" (UniqueName: \"kubernetes.io/projected/8baa40ff-b624-4d94-b966-89fa04f3536a-kube-api-access-ggc9q\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.405563 kubelet[2754]: I0716 00:01:18.404053 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-hostproc\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.408277 kubelet[2754]: I0716 00:01:18.404084 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-xtables-lock\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.408277 kubelet[2754]: I0716 00:01:18.404138 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-bpf-maps\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.408277 kubelet[2754]: I0716 00:01:18.404170 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-cilium-run\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.408277 kubelet[2754]: I0716 00:01:18.404201 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8baa40ff-b624-4d94-b966-89fa04f3536a-cilium-ipsec-secrets\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 00:01:18.408277 kubelet[2754]: I0716 00:01:18.404233 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8baa40ff-b624-4d94-b966-89fa04f3536a-cilium-cgroup\") pod \"cilium-9m8lb\" (UID: \"8baa40ff-b624-4d94-b966-89fa04f3536a\") " pod="kube-system/cilium-9m8lb" Jul 16 
00:01:18.683278 containerd[1585]: time="2025-07-16T00:01:18.683137777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m8lb,Uid:8baa40ff-b624-4d94-b966-89fa04f3536a,Namespace:kube-system,Attempt:0,}" Jul 16 00:01:18.713396 containerd[1585]: time="2025-07-16T00:01:18.713279233Z" level=info msg="connecting to shim 1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:01:18.746344 systemd[1]: Started cri-containerd-1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0.scope - libcontainer container 1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0. Jul 16 00:01:18.754370 sshd[4492]: Accepted publickey for core from 139.178.89.65 port 38562 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:18.756966 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:18.770422 systemd-logind[1527]: New session 26 of user core. Jul 16 00:01:18.780306 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 16 00:01:18.812671 containerd[1585]: time="2025-07-16T00:01:18.812624002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9m8lb,Uid:8baa40ff-b624-4d94-b966-89fa04f3536a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\"" Jul 16 00:01:18.816918 containerd[1585]: time="2025-07-16T00:01:18.816875226Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 16 00:01:18.825935 containerd[1585]: time="2025-07-16T00:01:18.825872269Z" level=info msg="Container 1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:18.834239 containerd[1585]: time="2025-07-16T00:01:18.834130378Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\"" Jul 16 00:01:18.835034 containerd[1585]: time="2025-07-16T00:01:18.835002762Z" level=info msg="StartContainer for \"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\"" Jul 16 00:01:18.836680 containerd[1585]: time="2025-07-16T00:01:18.836584273Z" level=info msg="connecting to shim 1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" protocol=ttrpc version=3 Jul 16 00:01:18.869429 systemd[1]: Started cri-containerd-1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f.scope - libcontainer container 1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f. Jul 16 00:01:18.914248 containerd[1585]: time="2025-07-16T00:01:18.914162253Z" level=info msg="StartContainer for \"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\" returns successfully" Jul 16 00:01:18.928880 systemd[1]: cri-containerd-1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f.scope: Deactivated successfully. 
Jul 16 00:01:18.931860 containerd[1585]: time="2025-07-16T00:01:18.931621756Z" level=info msg="received exit event container_id:\"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\" id:\"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\" pid:4560 exited_at:{seconds:1752624078 nanos:931224181}" Jul 16 00:01:18.932462 containerd[1585]: time="2025-07-16T00:01:18.932125621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\" id:\"1f3f3b7ad1a75ed3d303c6625bef9cbadbd071e71fcf29f57f5850deed71679f\" pid:4560 exited_at:{seconds:1752624078 nanos:931224181}" Jul 16 00:01:18.968229 sshd[4540]: Connection closed by 139.178.89.65 port 38562 Jul 16 00:01:18.968857 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:18.976589 systemd[1]: sshd@26-10.128.0.76:22-139.178.89.65:38562.service: Deactivated successfully. Jul 16 00:01:18.981049 systemd[1]: session-26.scope: Deactivated successfully. Jul 16 00:01:18.982764 systemd-logind[1527]: Session 26 logged out. Waiting for processes to exit. Jul 16 00:01:18.985315 systemd-logind[1527]: Removed session 26. Jul 16 00:01:19.021678 systemd[1]: Started sshd@27-10.128.0.76:22-139.178.89.65:53072.service - OpenSSH per-connection server daemon (139.178.89.65:53072). Jul 16 00:01:19.331023 sshd[4599]: Accepted publickey for core from 139.178.89.65 port 53072 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 16 00:01:19.333151 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:01:19.342147 systemd-logind[1527]: New session 27 of user core. Jul 16 00:01:19.346311 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 16 00:01:19.867492 containerd[1585]: time="2025-07-16T00:01:19.867418491Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 16 00:01:19.881504 containerd[1585]: time="2025-07-16T00:01:19.881426612Z" level=info msg="Container ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:19.895070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638735066.mount: Deactivated successfully. Jul 16 00:01:19.899674 containerd[1585]: time="2025-07-16T00:01:19.899624498Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\"" Jul 16 00:01:19.901999 containerd[1585]: time="2025-07-16T00:01:19.900401096Z" level=info msg="StartContainer for \"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\"" Jul 16 00:01:19.901999 containerd[1585]: time="2025-07-16T00:01:19.901544746Z" level=info msg="connecting to shim ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" protocol=ttrpc version=3 Jul 16 00:01:19.941325 systemd[1]: Started cri-containerd-ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd.scope - libcontainer container ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd. 
Jul 16 00:01:19.984214 containerd[1585]: time="2025-07-16T00:01:19.984160001Z" level=info msg="StartContainer for \"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\" returns successfully" Jul 16 00:01:19.992733 systemd[1]: cri-containerd-ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd.scope: Deactivated successfully. Jul 16 00:01:19.996480 containerd[1585]: time="2025-07-16T00:01:19.996329301Z" level=info msg="received exit event container_id:\"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\" id:\"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\" pid:4620 exited_at:{seconds:1752624079 nanos:995680565}" Jul 16 00:01:19.996833 containerd[1585]: time="2025-07-16T00:01:19.996781688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\" id:\"ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd\" pid:4620 exited_at:{seconds:1752624079 nanos:995680565}" Jul 16 00:01:20.027569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1a9cea6a9904f116f852b2b6d68c48c3fbadfde62514f2a5e88d788de95fdd-rootfs.mount: Deactivated successfully. Jul 16 00:01:20.875133 containerd[1585]: time="2025-07-16T00:01:20.874538851Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 16 00:01:20.899124 containerd[1585]: time="2025-07-16T00:01:20.898317296Z" level=info msg="Container 4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:20.922003 containerd[1585]: time="2025-07-16T00:01:20.921918178Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\"" Jul 16 00:01:20.922830 containerd[1585]: time="2025-07-16T00:01:20.922785284Z" level=info msg="StartContainer for \"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\"" Jul 16 00:01:20.928910 containerd[1585]: time="2025-07-16T00:01:20.928708113Z" level=info msg="connecting to shim 4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" protocol=ttrpc version=3 Jul 16 00:01:20.969664 systemd[1]: Started cri-containerd-4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407.scope - libcontainer container 4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407. Jul 16 00:01:21.036501 systemd[1]: cri-containerd-4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407.scope: Deactivated successfully. 
Jul 16 00:01:21.039550 containerd[1585]: time="2025-07-16T00:01:21.039224605Z" level=info msg="StartContainer for \"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\" returns successfully" Jul 16 00:01:21.043440 containerd[1585]: time="2025-07-16T00:01:21.043387058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\" id:\"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\" pid:4667 exited_at:{seconds:1752624081 nanos:42821708}" Jul 16 00:01:21.043803 containerd[1585]: time="2025-07-16T00:01:21.043473749Z" level=info msg="received exit event container_id:\"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\" id:\"4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407\" pid:4667 exited_at:{seconds:1752624081 nanos:42821708}" Jul 16 00:01:21.085243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ea20f5a91197c66e6d4f012c286b463f01dce4b55eec1346619a3962c246407-rootfs.mount: Deactivated successfully. Jul 16 00:01:21.208703 update_engine[1534]: I20250716 00:01:21.208353 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:01:21.209387 update_engine[1534]: I20250716 00:01:21.208943 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:01:21.209613 update_engine[1534]: I20250716 00:01:21.209503 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:01:21.222539 update_engine[1534]: E20250716 00:01:21.222406 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:01:21.222753 update_engine[1534]: I20250716 00:01:21.222565 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 16 00:01:21.384229 containerd[1585]: time="2025-07-16T00:01:21.384126743Z" level=info msg="StopPodSandbox for \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\"" Jul 16 00:01:21.384671 containerd[1585]: time="2025-07-16T00:01:21.384393423Z" level=info msg="TearDown network for sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" successfully" Jul 16 00:01:21.384671 containerd[1585]: time="2025-07-16T00:01:21.384418149Z" level=info msg="StopPodSandbox for \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" returns successfully" Jul 16 00:01:21.385736 containerd[1585]: time="2025-07-16T00:01:21.385285646Z" level=info msg="RemovePodSandbox for \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\"" Jul 16 00:01:21.385736 containerd[1585]: time="2025-07-16T00:01:21.385332371Z" level=info msg="Forcibly stopping sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\"" Jul 16 00:01:21.385736 containerd[1585]: time="2025-07-16T00:01:21.385582968Z" level=info msg="TearDown network for sandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" successfully" Jul 16 00:01:21.389995 containerd[1585]: time="2025-07-16T00:01:21.389866736Z" level=info msg="Ensure that sandbox 6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee in task-service has been cleanup successfully" Jul 16 00:01:21.402117 containerd[1585]: time="2025-07-16T00:01:21.401928017Z" level=info msg="RemovePodSandbox \"6cac57f89b12d5500fc6c23c953271504e1144227407d9982afac88a45744dee\" returns successfully" Jul 16 00:01:21.403209 containerd[1585]: time="2025-07-16T00:01:21.402841533Z" level=info msg="StopPodSandbox for 
\"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\"" Jul 16 00:01:21.403209 containerd[1585]: time="2025-07-16T00:01:21.403042655Z" level=info msg="TearDown network for sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" successfully" Jul 16 00:01:21.403209 containerd[1585]: time="2025-07-16T00:01:21.403064284Z" level=info msg="StopPodSandbox for \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" returns successfully" Jul 16 00:01:21.403960 containerd[1585]: time="2025-07-16T00:01:21.403924009Z" level=info msg="RemovePodSandbox for \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\"" Jul 16 00:01:21.404070 containerd[1585]: time="2025-07-16T00:01:21.403969286Z" level=info msg="Forcibly stopping sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\"" Jul 16 00:01:21.404162 containerd[1585]: time="2025-07-16T00:01:21.404130042Z" level=info msg="TearDown network for sandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" successfully" Jul 16 00:01:21.406199 containerd[1585]: time="2025-07-16T00:01:21.406163424Z" level=info msg="Ensure that sandbox 2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9 in task-service has been cleanup successfully" Jul 16 00:01:21.410685 containerd[1585]: time="2025-07-16T00:01:21.410568442Z" level=info msg="RemovePodSandbox \"2e516a7ea7eb4f9c511da64714fccb934de3c5b6313852fc62aa0dc3474d5eb9\" returns successfully" Jul 16 00:01:21.522617 kubelet[2754]: E0716 00:01:21.522531 2754 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 16 00:01:21.885492 containerd[1585]: time="2025-07-16T00:01:21.885325374Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 16 00:01:21.909704 containerd[1585]: time="2025-07-16T00:01:21.907263757Z" level=info msg="Container ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:21.920990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183191626.mount: Deactivated successfully. Jul 16 00:01:21.928907 containerd[1585]: time="2025-07-16T00:01:21.928781536Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\"" Jul 16 00:01:21.931074 containerd[1585]: time="2025-07-16T00:01:21.930982092Z" level=info msg="StartContainer for \"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\"" Jul 16 00:01:21.934300 containerd[1585]: time="2025-07-16T00:01:21.934214209Z" level=info msg="connecting to shim ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" protocol=ttrpc version=3 Jul 16 00:01:21.986444 systemd[1]: Started cri-containerd-ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79.scope - libcontainer container ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79. Jul 16 00:01:22.045517 systemd[1]: cri-containerd-ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79.scope: Deactivated successfully. 
Jul 16 00:01:22.048085 containerd[1585]: time="2025-07-16T00:01:22.048015875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\" id:\"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\" pid:4711 exited_at:{seconds:1752624082 nanos:46807383}" Jul 16 00:01:22.051177 containerd[1585]: time="2025-07-16T00:01:22.050280347Z" level=info msg="received exit event container_id:\"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\" id:\"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\" pid:4711 exited_at:{seconds:1752624082 nanos:46807383}" Jul 16 00:01:22.066531 containerd[1585]: time="2025-07-16T00:01:22.066470070Z" level=info msg="StartContainer for \"ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79\" returns successfully" Jul 16 00:01:22.093931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad853088c2149a352a0a2cee37cfaf9e27573bb39d25449f1c25c6bbdd403f79-rootfs.mount: Deactivated successfully. Jul 16 00:01:22.899017 containerd[1585]: time="2025-07-16T00:01:22.898952349Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 16 00:01:22.916505 containerd[1585]: time="2025-07-16T00:01:22.916396768Z" level=info msg="Container 1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:01:22.932583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522829377.mount: Deactivated successfully. Jul 16 00:01:22.943230 containerd[1585]: time="2025-07-16T00:01:22.941675540Z" level=info msg="CreateContainer within sandbox \"1e71b4c2e947a0ba1cb130db7e03acdd8fd6f862d6ccefcebefecf63d1c5ebe0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\"" Jul 16 00:01:22.950365 containerd[1585]: time="2025-07-16T00:01:22.949918634Z" level=info msg="StartContainer for \"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\"" Jul 16 00:01:22.952029 containerd[1585]: time="2025-07-16T00:01:22.951973520Z" level=info msg="connecting to shim 1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79" address="unix:///run/containerd/s/12863af4eb5448390f49f3c5c4fd1ca355e6820e70bb65310e1e468df6d958bd" protocol=ttrpc version=3 Jul 16 00:01:22.989421 systemd[1]: Started cri-containerd-1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79.scope - libcontainer container 1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79. 
Jul 16 00:01:23.048247 containerd[1585]: time="2025-07-16T00:01:23.048180496Z" level=info msg="StartContainer for \"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" returns successfully" Jul 16 00:01:23.173397 containerd[1585]: time="2025-07-16T00:01:23.173167588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"4f4ec2bfb3a7d9a6fa59622d55be06743b63ca136c7be77e0d0a16549ec1948d\" pid:4776 exited_at:{seconds:1752624083 nanos:171837634}" Jul 16 00:01:23.677171 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 16 00:01:23.929787 kubelet[2754]: I0716 00:01:23.929391 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9m8lb" podStartSLOduration=5.929359844 podStartE2EDuration="5.929359844s" podCreationTimestamp="2025-07-16 00:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:01:23.927989068 +0000 UTC m=+122.751909625" watchObservedRunningTime="2025-07-16 00:01:23.929359844 +0000 UTC m=+122.753280370" Jul 16 00:01:24.095409 kubelet[2754]: I0716 00:01:24.095288 2754 setters.go:600] "Node became not ready" node="ci-4372-0-1-nightly-20250715-2100-9de04c7c4d034f35c91f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-16T00:01:24Z","lastTransitionTime":"2025-07-16T00:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 16 00:01:25.952440 containerd[1585]: time="2025-07-16T00:01:25.952370374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"6a0d8f11599111f840ae1f0897d12b8918e9f1f0820a18ebc5fb7a2225269a05\" pid:4951 exit_status:1 exited_at:{seconds:1752624085 nanos:950650613}" Jul 16 00:01:27.033723 systemd-networkd[1448]: lxc_health: Link UP Jul 16 00:01:27.046420 systemd-networkd[1448]: lxc_health: Gained carrier Jul 16 00:01:28.249376 containerd[1585]: time="2025-07-16T00:01:28.249312141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"53a49b57a9ac65c6a917ba4660301c3ee3bce6fd89d8d9975c9ed89a333a7c9a\" pid:5296 exited_at:{seconds:1752624088 nanos:248148395}" Jul 16 00:01:28.971996 systemd-networkd[1448]: lxc_health: Gained IPv6LL Jul 16 00:01:30.451460 containerd[1585]: time="2025-07-16T00:01:30.451395314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"efcdf7185197d320fb68234a1e556e68fcd5745de0b1131eeb3776d8b5f3590a\" pid:5329 exited_at:{seconds:1752624090 nanos:449447713}" Jul 16 00:01:30.458435 kubelet[2754]: E0716 00:01:30.458365 2754 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60128->127.0.0.1:44959: write tcp 127.0.0.1:60128->127.0.0.1:44959: write: broken pipe Jul 16 00:01:31.217304 update_engine[1534]: I20250716 00:01:31.216184 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:01:31.217304 update_engine[1534]: I20250716 00:01:31.216718 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:01:31.217304 update_engine[1534]: I20250716 00:01:31.217219 1534 
libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:01:31.244229 update_engine[1534]: E20250716 00:01:31.244156 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:01:31.244518 update_engine[1534]: I20250716 00:01:31.244479 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 16 00:01:31.280546 ntpd[1518]: Listen normally on 15 lxc_health [fe80::98de:9dff:fe46:e84f%14]:123 Jul 16 00:01:31.281450 ntpd[1518]: 16 Jul 00:01:31 ntpd[1518]: Listen normally on 15 lxc_health [fe80::98de:9dff:fe46:e84f%14]:123 Jul 16 00:01:32.762454 containerd[1585]: time="2025-07-16T00:01:32.762335950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"0fa4c71a3d1b7d27417c17c9327998b66d11a8f675d11108eea10610f8676bad\" pid:5356 exited_at:{seconds:1752624092 nanos:760199891}" Jul 16 00:01:32.770619 kubelet[2754]: E0716 00:01:32.770532 2754 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:60142->127.0.0.1:44959: read tcp 127.0.0.1:60142->127.0.0.1:44959: read: connection reset by peer Jul 16 00:01:34.930016 containerd[1585]: time="2025-07-16T00:01:34.929955649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bdff73c7a0d2757bd7bc90f47c59dbffede7b6ddc929ee04910763e16022b79\" id:\"004ae7ddb7c7b8f746949d1c3e78a37970923d7c49684a2e55b28ec96009b36b\" pid:5385 exited_at:{seconds:1752624094 nanos:928890289}" Jul 16 00:01:34.978690 sshd[4601]: Connection closed by 139.178.89.65 port 53072 Jul 16 00:01:34.980178 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Jul 16 00:01:34.985802 systemd[1]: sshd@27-10.128.0.76:22-139.178.89.65:53072.service: Deactivated successfully. Jul 16 00:01:34.989147 systemd[1]: session-27.scope: Deactivated successfully. Jul 16 00:01:34.992632 systemd-logind[1527]: Session 27 logged out. Waiting for processes to exit. Jul 16 00:01:34.994251 systemd-logind[1527]: Removed session 27.