Sep 9 05:41:20.638411 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:41:20.638464 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:41:20.638481 kernel: BIOS-provided physical RAM map:
Sep 9 05:41:20.638494 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 9 05:41:20.638506 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 9 05:41:20.638518 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 9 05:41:20.638538 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 9 05:41:20.638552 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 9 05:41:20.638565 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 9 05:41:20.638579 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 9 05:41:20.638592 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 9 05:41:20.638605 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 9 05:41:20.638618 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 9 05:41:20.638632 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 9 05:41:20.638651 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 9 05:41:20.638666 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 9 05:41:20.638681 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 9 05:41:20.638697 kernel: NX (Execute Disable) protection: active
Sep 9 05:41:20.638712 kernel: APIC: Static calls initialized
Sep 9 05:41:20.638728 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:41:20.638744 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 9 05:41:20.638759 kernel: random: crng init done
Sep 9 05:41:20.638779 kernel: secureboot: Secure boot disabled
Sep 9 05:41:20.638794 kernel: SMBIOS 2.4 present.
Sep 9 05:41:20.638810 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 9 05:41:20.638825 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:41:20.638840 kernel: Hypervisor detected: KVM
Sep 9 05:41:20.638855 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:41:20.638871 kernel: kvm-clock: using sched offset of 14444649668 cycles
Sep 9 05:41:20.638887 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:41:20.638903 kernel: tsc: Detected 2299.998 MHz processor
Sep 9 05:41:20.638919 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:41:20.638938 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:41:20.638953 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 9 05:41:20.638969 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 9 05:41:20.638984 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:41:20.638997 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 9 05:41:20.639013 kernel: Using GB pages for direct mapping
Sep 9 05:41:20.639029 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:41:20.639046 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 9 05:41:20.639071 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 9 05:41:20.639088 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 9 05:41:20.639459 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 9 05:41:20.639481 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 9 05:41:20.639501 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 9 05:41:20.639522 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 9 05:41:20.639543 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 9 05:41:20.639560 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 9 05:41:20.639577 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 9 05:41:20.639594 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 9 05:41:20.639610 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 9 05:41:20.639628 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 9 05:41:20.639644 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 9 05:41:20.639660 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 9 05:41:20.639675 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 9 05:41:20.639694 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 9 05:41:20.639711 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 9 05:41:20.639727 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 9 05:41:20.639744 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 9 05:41:20.639765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 05:41:20.639798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 9 05:41:20.639815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 9 05:41:20.639844 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Sep 9 05:41:20.639862 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Sep 9 05:41:20.639881 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Sep 9 05:41:20.639899 kernel: Zone ranges:
Sep 9 05:41:20.639916 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:41:20.639938 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 9 05:41:20.639953 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 9 05:41:20.639969 kernel: Device empty
Sep 9 05:41:20.639985 kernel: Movable zone start for each node
Sep 9 05:41:20.640003 kernel: Early memory node ranges
Sep 9 05:41:20.640020 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 9 05:41:20.640037 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 9 05:41:20.640059 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 9 05:41:20.640076 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 9 05:41:20.640094 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 9 05:41:20.640111 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 9 05:41:20.640128 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 9 05:41:20.640145 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:41:20.640163 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 9 05:41:20.640178 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 9 05:41:20.640195 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 9 05:41:20.640217 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:41:20.640234 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 9 05:41:20.640738 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 9 05:41:20.640758 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:41:20.640775 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:41:20.640792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:41:20.640809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:41:20.640826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:41:20.640842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:41:20.640865 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:41:20.640882 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:41:20.640898 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:41:20.640914 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:41:20.640931 kernel: CPU topo: Max. threads per core: 2
Sep 9 05:41:20.640948 kernel: CPU topo: Num. cores per package: 1
Sep 9 05:41:20.640964 kernel: CPU topo: Num. threads per package: 2
Sep 9 05:41:20.640981 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 9 05:41:20.640997 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 9 05:41:20.641017 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:41:20.641034 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:41:20.641051 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 9 05:41:20.641067 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 9 05:41:20.641084 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 9 05:41:20.641100 kernel: pcpu-alloc: [0] 0 1
Sep 9 05:41:20.641117 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:41:20.641133 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:41:20.641151 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:41:20.641171 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:41:20.641188 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 9 05:41:20.641205 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:41:20.641222 kernel: Fallback order for Node 0: 0
Sep 9 05:41:20.641238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Sep 9 05:41:20.641275 kernel: Policy zone: Normal
Sep 9 05:41:20.641291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:41:20.641343 kernel: software IO TLB: area num 2.
Sep 9 05:41:20.641376 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 05:41:20.641395 kernel: Kernel/User page tables isolation: enabled
Sep 9 05:41:20.641413 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:41:20.641445 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:41:20.641463 kernel: Dynamic Preempt: voluntary
Sep 9 05:41:20.641483 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:41:20.641501 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:41:20.641518 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 05:41:20.641536 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:41:20.641558 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:41:20.641576 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:41:20.641596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:41:20.641623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 05:41:20.641641 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:41:20.641657 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:41:20.641675 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:41:20.641696 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 9 05:41:20.641714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:41:20.641733 kernel: Console: colour dummy device 80x25
Sep 9 05:41:20.641752 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:41:20.641768 kernel: ACPI: Core revision 20240827
Sep 9 05:41:20.641785 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:41:20.641804 kernel: x2apic enabled
Sep 9 05:41:20.641822 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:41:20.641841 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 9 05:41:20.641924 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 9 05:41:20.641947 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 9 05:41:20.641967 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 9 05:41:20.641986 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 9 05:41:20.642005 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:41:20.642024 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 9 05:41:20.642041 kernel: Spectre V2 : Mitigation: IBRS
Sep 9 05:41:20.642059 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:41:20.642077 kernel: RETBleed: Mitigation: IBRS
Sep 9 05:41:20.642099 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:41:20.642118 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 9 05:41:20.642137 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:41:20.642156 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 05:41:20.642173 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 9 05:41:20.642191 kernel: active return thunk: its_return_thunk
Sep 9 05:41:20.642207 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 05:41:20.642224 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:41:20.642267 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:41:20.642291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:41:20.642309 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:41:20.642327 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 05:41:20.642346 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:41:20.642365 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:41:20.642384 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:41:20.642403 kernel: landlock: Up and running.
Sep 9 05:41:20.642432 kernel: SELinux: Initializing.
Sep 9 05:41:20.642450 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:41:20.642471 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:41:20.642489 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 9 05:41:20.642507 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 9 05:41:20.642526 kernel: signal: max sigframe size: 1776
Sep 9 05:41:20.642544 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:41:20.642563 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:41:20.642581 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:41:20.642599 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 05:41:20.642618 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:41:20.642641 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:41:20.642664 kernel: .... node #0, CPUs: #1
Sep 9 05:41:20.642684 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 9 05:41:20.642704 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 9 05:41:20.642722 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 05:41:20.642741 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 9 05:41:20.642759 kernel: Memory: 7564024K/7860552K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 290704K reserved, 0K cma-reserved)
Sep 9 05:41:20.642778 kernel: devtmpfs: initialized
Sep 9 05:41:20.642800 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:41:20.642818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 9 05:41:20.642837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:41:20.642855 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 05:41:20.642874 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:41:20.642893 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:41:20.642911 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:41:20.642931 kernel: audit: type=2000 audit(1757396475.796:1): state=initialized audit_enabled=0 res=1
Sep 9 05:41:20.642948 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:41:20.642971 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:41:20.642989 kernel: cpuidle: using governor menu
Sep 9 05:41:20.643007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:41:20.643026 kernel: dca service started, version 1.12.1
Sep 9 05:41:20.643044 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:41:20.643062 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:41:20.643080 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:41:20.643099 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:41:20.643117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:41:20.643139 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:41:20.643158 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:41:20.643175 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:41:20.643194 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:41:20.643213 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 9 05:41:20.643231 kernel: ACPI: Interpreter enabled
Sep 9 05:41:20.644293 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 05:41:20.644317 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:41:20.644337 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:41:20.644362 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 9 05:41:20.644380 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 9 05:41:20.644398 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:41:20.644674 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:41:20.644862 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 9 05:41:20.645042 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 9 05:41:20.645066 kernel: PCI host bridge to bus 0000:00
Sep 9 05:41:20.646284 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:41:20.646514 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:41:20.646689 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:41:20.646861 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 9 05:41:20.647053 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:41:20.655411 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:41:20.655652 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:41:20.655857 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 9 05:41:20.656048 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 9 05:41:20.656278 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Sep 9 05:41:20.656488 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Sep 9 05:41:20.656677 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Sep 9 05:41:20.656877 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:41:20.657070 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Sep 9 05:41:20.657288 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Sep 9 05:41:20.657494 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:41:20.657683 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Sep 9 05:41:20.657864 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Sep 9 05:41:20.657888 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:41:20.657908 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:41:20.657932 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:41:20.657951 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:41:20.657970 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 9 05:41:20.657990 kernel: iommu: Default domain type: Translated
Sep 9 05:41:20.658009 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:41:20.658028 kernel: efivars: Registered efivars operations
Sep 9 05:41:20.658047 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:41:20.658066 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:41:20.658085 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 9 05:41:20.658108 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 9 05:41:20.658126 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 9 05:41:20.658145 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 9 05:41:20.658163 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 9 05:41:20.658180 kernel: vgaarb: loaded
Sep 9 05:41:20.658199 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:41:20.658217 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:41:20.658237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:41:20.658296 kernel: pnp: PnP ACPI init
Sep 9 05:41:20.658320 kernel: pnp: PnP ACPI: found 7 devices
Sep 9 05:41:20.658339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:41:20.658359 kernel: NET: Registered PF_INET protocol family
Sep 9 05:41:20.658378 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 05:41:20.658397 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 9 05:41:20.658424 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:41:20.658443 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:41:20.658462 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 9 05:41:20.658481 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 9 05:41:20.658505 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 9 05:41:20.658524 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 9 05:41:20.658543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:41:20.658562 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:41:20.658740 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:41:20.658907 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:41:20.659071 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:41:20.659234 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 9 05:41:20.659465 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 05:41:20.659492 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:41:20.659511 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 9 05:41:20.659531 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 9 05:41:20.659550 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 05:41:20.659569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 9 05:41:20.659588 kernel: clocksource: Switched to clocksource tsc
Sep 9 05:41:20.659607 kernel: Initialise system trusted keyrings
Sep 9 05:41:20.659630 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 9 05:41:20.659648 kernel: Key type asymmetric registered
Sep 9 05:41:20.659667 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:41:20.659685 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:41:20.659705 kernel: io scheduler mq-deadline registered
Sep 9 05:41:20.659724 kernel: io scheduler kyber registered
Sep 9 05:41:20.659742 kernel: io scheduler bfq registered
Sep 9 05:41:20.659762 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:41:20.659781 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 9 05:41:20.659971 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 9 05:41:20.659996 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 9 05:41:20.660179 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 9 05:41:20.660204 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 9 05:41:20.660859 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 9 05:41:20.660887 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:41:20.660906 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:41:20.660925 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 9 05:41:20.660942 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 9 05:41:20.660968 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 9 05:41:20.661203 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 9 05:41:20.661232 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:41:20.661269 kernel: i8042: Warning: Keylock active
Sep 9 05:41:20.661287 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:41:20.661305 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:41:20.661523 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 9 05:41:20.661719 kernel: rtc_cmos 00:00: registered as rtc0
Sep 9 05:41:20.661904 kernel: rtc_cmos 00:00: setting system clock to 2025-09-09T05:41:19 UTC (1757396479)
Sep 9 05:41:20.662084 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 9 05:41:20.662108 kernel: intel_pstate: CPU model not supported
Sep 9 05:41:20.662128 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:41:20.662148 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:41:20.662168 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:41:20.662186 kernel: Segment Routing with IPv6
Sep 9 05:41:20.662210 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:41:20.662226 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:41:20.662262 kernel: Key type dns_resolver registered
Sep 9 05:41:20.662290 kernel: IPI shorthand broadcast: enabled
Sep 9 05:41:20.662310 kernel: sched_clock: Marking stable (3542005343, 148791031)->(3728816323, -38019949)
Sep 9 05:41:20.662329 kernel: registered taskstats version 1
Sep 9 05:41:20.662347 kernel: Loading compiled-in X.509 certificates
Sep 9 05:41:20.662366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:41:20.662384 kernel: Demotion targets for Node 0: null
Sep 9 05:41:20.662402 kernel: Key type .fscrypt registered
Sep 9 05:41:20.662434 kernel: Key type fscrypt-provisioning registered
Sep 9 05:41:20.662453 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:41:20.662472 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 9 05:41:20.662491 kernel: ima: No architecture policies found
Sep 9 05:41:20.662509 kernel: clk: Disabling unused clocks
Sep 9 05:41:20.662528 kernel: Warning: unable to open an initial console.
Sep 9 05:41:20.662548 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:41:20.662567 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:41:20.662589 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:41:20.662608 kernel: Run /init as init process
Sep 9 05:41:20.662627 kernel: with arguments:
Sep 9 05:41:20.662650 kernel: /init
Sep 9 05:41:20.662669 kernel: with environment:
Sep 9 05:41:20.662687 kernel: HOME=/
Sep 9 05:41:20.662706 kernel: TERM=linux
Sep 9 05:41:20.662725 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:41:20.662745 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:41:20.662774 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:41:20.662795 systemd[1]: Detected virtualization google.
Sep 9 05:41:20.662814 systemd[1]: Detected architecture x86-64.
Sep 9 05:41:20.662833 systemd[1]: Running in initrd.
Sep 9 05:41:20.662851 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:41:20.662871 systemd[1]: Hostname set to .
Sep 9 05:41:20.662890 systemd[1]: Initializing machine ID from random generator.
Sep 9 05:41:20.662915 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:41:20.662954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:41:20.662979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:41:20.663000 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:41:20.663020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:41:20.663041 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:41:20.663068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:41:20.663091 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:41:20.663111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:41:20.663132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:41:20.663152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:41:20.663172 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:41:20.663193 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:41:20.663217 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:41:20.663238 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:41:20.663278 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:41:20.663299 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:41:20.663320 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:41:20.663341 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:41:20.663361 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:41:20.663382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:41:20.663406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:41:20.663431 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:41:20.663449 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:41:20.663472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:41:20.663496 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:41:20.663522 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:41:20.663549 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:41:20.663574 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:41:20.663601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:41:20.663627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:20.663686 systemd-journald[208]: Collecting audit messages is disabled. Sep 9 05:41:20.663729 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:41:20.663756 systemd-journald[208]: Journal started Sep 9 05:41:20.663795 systemd-journald[208]: Runtime Journal (/run/log/journal/5b3e6bccf8c44ad8a7de4a607d55d4ee) is 8M, max 148.9M, 140.9M free. Sep 9 05:41:20.637301 systemd-modules-load[209]: Inserted module 'overlay' Sep 9 05:41:20.673262 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:41:20.674123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:41:20.674804 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:41:20.679413 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:41:20.680521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:41:20.710278 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:41:20.717883 systemd-modules-load[209]: Inserted module 'br_netfilter' Sep 9 05:41:20.794419 kernel: Bridge firewalling registered Sep 9 05:41:20.719384 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:41:20.726039 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:41:20.788028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:41:20.804782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:41:20.823787 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 05:41:20.839602 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:41:20.886579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:41:20.919521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:41:20.924354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:41:20.934444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:41:20.945631 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:41:20.966738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:41:20.980416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:41:20.993106 systemd-resolved[240]: Positive Trust Anchors: Sep 9 05:41:20.993116 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:41:20.993159 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:41:20.996521 systemd-resolved[240]: Defaulting to hostname 'linux'. 
Sep 9 05:41:21.101394 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:41:20.998214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:41:21.012823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:41:21.166538 kernel: SCSI subsystem initialized Sep 9 05:41:21.174290 kernel: Loading iSCSI transport class v2.0-870. Sep 9 05:41:21.190278 kernel: iscsi: registered transport (tcp) Sep 9 05:41:21.223297 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:41:21.223412 kernel: QLogic iSCSI HBA Driver Sep 9 05:41:21.248613 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:41:21.283715 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:41:21.308694 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:41:21.385028 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:41:21.387065 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:41:21.485303 kernel: raid6: avx2x4 gen() 21723 MB/s Sep 9 05:41:21.506286 kernel: raid6: avx2x2 gen() 23771 MB/s Sep 9 05:41:21.532285 kernel: raid6: avx2x1 gen() 20980 MB/s Sep 9 05:41:21.532392 kernel: raid6: using algorithm avx2x2 gen() 23771 MB/s Sep 9 05:41:21.559359 kernel: raid6: .... 
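The dracut-cmdline hook above echoes the full kernel command line, a space-separated list of `key=value` tokens (with some bare flags). As a rough illustrative sketch, not part of the boot log, such a line can be split like this; the parameter names below are quoted verbatim from the line above:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key/value pairs.
    Bare flags (no '=') map to an empty string; a repeated
    key (e.g. rootflags appears twice above) keeps the last value."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else ""
    return params

# A subset of the parameters from the dracut-cmdline log line above
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 "
           "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.oem.id=gce")
params = parse_cmdline(cmdline)
print(params["root"])            # LABEL=ROOT
print(params["flatcar.oem.id"])  # gce
```

Note that `root=LABEL=ROOT` partitions on the first `=` only, so the value keeps its own `LABEL=` prefix.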
xor() 18532 MB/s, rmw enabled Sep 9 05:41:21.559440 kernel: raid6: using avx2x2 recovery algorithm Sep 9 05:41:21.589292 kernel: xor: automatically using best checksumming function avx Sep 9 05:41:21.779300 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:41:21.788510 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:41:21.799696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:41:21.859921 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 9 05:41:21.868964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:41:21.892791 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:41:21.932545 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Sep 9 05:41:21.968498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:41:21.970668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:41:22.078197 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:41:22.101947 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:41:22.208290 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 05:41:22.217976 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Sep 9 05:41:22.230290 kernel: scsi host0: Virtio SCSI HBA Sep 9 05:41:22.244390 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 9 05:41:22.253781 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:41:22.329496 kernel: AES CTR mode by8 optimization enabled Sep 9 05:41:22.333030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 9 05:41:22.389584 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 9 05:41:22.389994 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 9 05:41:22.390229 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 9 05:41:22.390473 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 9 05:41:22.390683 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 9 05:41:22.390898 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:41:22.390923 kernel: GPT:17805311 != 25165823 Sep 9 05:41:22.390946 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:41:22.333272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:41:22.431634 kernel: GPT:17805311 != 25165823 Sep 9 05:41:22.431700 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:41:22.431736 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 05:41:22.431763 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 9 05:41:22.370908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:22.411214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:22.441042 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:41:22.526858 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 9 05:41:22.527527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:41:22.556816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:41:22.598837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 9 05:41:22.618946 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
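The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", `17805311 != 25165823`) follow from the geometry the kernel reports: the backup GPT header must live in the disk's last logical block. A quick arithmetic check, using the 12.0 GiB / 512-byte-sector figures from the `sd` lines above (the smaller value, 17805311, presumably reflects the disk size the image was built for before the persistent disk was grown; `disk-uuid.service` later rewrites the secondary header):

```python
SECTOR = 512
disk_bytes = 12 * 2**30            # 12.0 GiB, as reported for sda
sectors = disk_bytes // SECTOR     # logical blocks on the disk
backup_gpt_lba = sectors - 1       # last LBA, where the backup GPT header belongs

print(sectors)         # 25165824
print(backup_gpt_lba)  # 25165823, the value the kernel expected
```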
Sep 9 05:41:22.619226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 9 05:41:22.652686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 9 05:41:22.681582 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:41:22.681923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:41:22.701720 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:41:22.737753 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:41:22.749774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:41:22.780138 disk-uuid[606]: Primary Header is updated. Sep 9 05:41:22.780138 disk-uuid[606]: Secondary Entries is updated. Sep 9 05:41:22.780138 disk-uuid[606]: Secondary Header is updated. Sep 9 05:41:22.798659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:41:22.812574 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 05:41:23.841272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 05:41:23.841381 disk-uuid[607]: The operation has completed successfully. Sep 9 05:41:23.921239 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:41:23.921411 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:41:23.965722 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:41:23.993467 sh[628]: Success Sep 9 05:41:24.030703 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 9 05:41:24.030791 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:41:24.031042 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:41:24.057280 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 05:41:24.139641 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:41:24.144355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:41:24.185131 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:41:24.214285 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (640) Sep 9 05:41:24.232273 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:41:24.232340 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:24.262767 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 05:41:24.262859 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:41:24.262903 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:41:24.273206 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:41:24.274269 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:41:24.296623 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:41:24.297945 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:41:24.306742 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 9 05:41:24.374330 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (663) Sep 9 05:41:24.392573 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:24.392697 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:24.416535 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 05:41:24.416655 kernel: BTRFS info (device sda6): turning on async discard Sep 9 05:41:24.416696 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 05:41:24.432289 kernel: BTRFS info (device sda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:24.432954 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:41:24.454471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:41:24.525385 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:41:24.528212 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:41:24.677231 systemd-networkd[809]: lo: Link UP Sep 9 05:41:24.677746 systemd-networkd[809]: lo: Gained carrier Sep 9 05:41:24.680329 systemd-networkd[809]: Enumeration completed Sep 9 05:41:24.680862 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:41:24.680870 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:41:24.683137 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 9 05:41:24.728206 ignition[764]: Ignition 2.22.0 Sep 9 05:41:24.684416 systemd-networkd[809]: eth0: Link UP Sep 9 05:41:24.728215 ignition[764]: Stage: fetch-offline Sep 9 05:41:24.684741 systemd-networkd[809]: eth0: Gained carrier Sep 9 05:41:24.728294 ignition[764]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:24.684769 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:41:24.728307 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:24.697774 systemd-networkd[809]: eth0: Overlong DHCP hostname received, shortened from 'ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5.c.flatcar-212911.internal' to 'ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5' Sep 9 05:41:24.728435 ignition[764]: parsed url from cmdline: "" Sep 9 05:41:24.697796 systemd-networkd[809]: eth0: DHCPv4 address 10.128.0.68/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 9 05:41:24.728439 ignition[764]: no config URL provided Sep 9 05:41:24.727620 systemd[1]: Reached target network.target - Network. Sep 9 05:41:24.728446 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:41:24.736025 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:41:24.728455 ignition[764]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:41:24.747823 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 9 05:41:24.728463 ignition[764]: failed to fetch config: resource requires networking Sep 9 05:41:24.817483 unknown[822]: fetched base config from "system" Sep 9 05:41:24.729095 ignition[764]: Ignition finished successfully Sep 9 05:41:24.817495 unknown[822]: fetched base config from "system" Sep 9 05:41:24.802553 ignition[822]: Ignition 2.22.0 Sep 9 05:41:24.817505 unknown[822]: fetched user config from "gcp" Sep 9 05:41:24.802563 ignition[822]: Stage: fetch Sep 9 05:41:24.821465 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 05:41:24.802759 ignition[822]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:24.829174 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:41:24.802772 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:24.930603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:41:24.802883 ignition[822]: parsed url from cmdline: "" Sep 9 05:41:24.946999 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 05:41:24.802888 ignition[822]: no config URL provided Sep 9 05:41:25.014392 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:41:24.802896 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:41:25.020979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:41:24.802906 ignition[822]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:41:25.046588 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:41:24.802942 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 9 05:41:25.071620 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:41:24.808397 ignition[822]: GET result: OK Sep 9 05:41:25.088593 systemd[1]: Reached target sysinit.target - System Initialization. 
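The fetch stage above retrieves user data from GCE's link-local metadata server (`169.254.169.254`), which only answers requests carrying the `Metadata-Flavor: Google` header. A minimal sketch of an equivalent request, assuming you are on a GCE instance (this is an illustration of the documented metadata API, not Ignition's own code; the request is only constructed here, not sent):

```python
import urllib.request

# URL quoted from the Ignition GET line above
METADATA_URL = ("http://169.254.169.254/computeMetadata/v1/"
                "instance/attributes/user-data")

def build_metadata_request(url: str = METADATA_URL) -> urllib.request.Request:
    """GCE's metadata server rejects requests lacking this header."""
    return urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})

req = build_metadata_request()
# urllib.request.urlopen(req) would return the user-data payload,
# but only from inside a GCE instance.
```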
Sep 9 05:41:24.808515 ignition[822]: parsing config with SHA512: 936eefe21e90ea35198fd207ee45feab2fab737e55aaa52f7e8863639be3feac8e1bfb821c26c938047b9fd2f5fe51fdc4bdcd7737d1228e0bd816c2b4849677 Sep 9 05:41:25.114606 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:41:24.818029 ignition[822]: fetch: fetch complete Sep 9 05:41:25.124292 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:41:24.818037 ignition[822]: fetch: fetch passed Sep 9 05:41:24.818099 ignition[822]: Ignition finished successfully Sep 9 05:41:24.926534 ignition[828]: Ignition 2.22.0 Sep 9 05:41:24.926544 ignition[828]: Stage: kargs Sep 9 05:41:24.926722 ignition[828]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:24.926738 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:24.927909 ignition[828]: kargs: kargs passed Sep 9 05:41:24.927984 ignition[828]: Ignition finished successfully Sep 9 05:41:25.010689 ignition[835]: Ignition 2.22.0 Sep 9 05:41:25.010699 ignition[835]: Stage: disks Sep 9 05:41:25.010877 ignition[835]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:25.010888 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:25.011966 ignition[835]: disks: disks passed Sep 9 05:41:25.012034 ignition[835]: Ignition finished successfully Sep 9 05:41:25.188785 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 9 05:41:25.288228 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:41:25.289785 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:41:25.492608 kernel: EXT4-fs (sda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:41:25.492499 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:41:25.501159 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
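Before applying a fetched config, Ignition logs its SHA-512 digest, as in the "parsing config with SHA512: 936eef…" line above. The same kind of digest can be computed with the standard library (a generic illustration; the hex value shown is the well-known digest of empty input, not of the config in this log):

```python
import hashlib

def config_digest(raw: bytes) -> str:
    """SHA-512 hex digest of a payload, like the one Ignition logs."""
    return hashlib.sha512(raw).hexdigest()

print(config_digest(b"")[:16])  # cf83e1357eefb8bd (digest of empty input)
```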
Sep 9 05:41:25.518716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:41:25.536270 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:41:25.555975 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:41:25.595524 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (851) Sep 9 05:41:25.595575 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:25.595599 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:25.556367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:41:25.631556 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 05:41:25.631596 kernel: BTRFS info (device sda6): turning on async discard Sep 9 05:41:25.631612 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 05:41:25.556412 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:41:25.643519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:41:25.657574 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:41:25.673576 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:41:25.812224 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:41:25.822421 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:41:25.830848 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:41:25.839415 initrd-setup-root[896]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:41:25.966919 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 9 05:41:25.986170 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:41:25.994603 systemd-networkd[809]: eth0: Gained IPv6LL Sep 9 05:41:25.996167 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:41:26.026525 kernel: BTRFS info (device sda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:26.035558 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:41:26.071087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:41:26.078440 ignition[963]: INFO : Ignition 2.22.0 Sep 9 05:41:26.078440 ignition[963]: INFO : Stage: mount Sep 9 05:41:26.078440 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:26.078440 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:26.120652 ignition[963]: INFO : mount: mount passed Sep 9 05:41:26.120652 ignition[963]: INFO : Ignition finished successfully Sep 9 05:41:26.084915 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:41:26.099967 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:41:26.145887 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:41:26.196268 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (976) Sep 9 05:41:26.213638 kernel: BTRFS info (device sda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:26.213738 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:26.231279 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 05:41:26.231360 kernel: BTRFS info (device sda6): turning on async discard Sep 9 05:41:26.231386 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 05:41:26.238802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:41:26.281719 ignition[993]: INFO : Ignition 2.22.0 Sep 9 05:41:26.281719 ignition[993]: INFO : Stage: files Sep 9 05:41:26.295375 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:26.295375 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 9 05:41:26.295375 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:41:26.295375 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:41:26.295375 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:41:26.295375 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:41:26.295375 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:41:26.295375 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:41:26.295375 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:41:26.295375 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 05:41:26.291005 unknown[993]: wrote ssh authorized keys file for user: core Sep 9 05:41:26.420341 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:41:27.291040 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:41:27.306409 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:41:27.306409 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 05:41:27.494099 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 05:41:27.637973 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:41:27.637973 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:41:27.666421 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 05:41:27.953176 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 05:41:28.327482 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:41:28.327482 ignition[993]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:41:28.363500 ignition[993]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:41:28.363500 ignition[993]: INFO : files: 
createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:41:28.363500 ignition[993]: INFO : files: files passed Sep 9 05:41:28.363500 ignition[993]: INFO : Ignition finished successfully Sep 9 05:41:28.335122 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:41:28.347011 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:41:28.373519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:41:28.423830 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:41:28.572503 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:41:28.572503 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:41:28.424014 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:41:28.633407 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:41:28.444237 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:41:28.455499 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:41:28.476347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:41:28.551564 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:41:28.551696 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:41:28.564024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:41:28.581395 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:41:28.601591 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
Sep 9 05:41:28.602850 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:41:28.666928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:41:28.688281 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:41:28.735003 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:41:28.755558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:41:28.775592 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:41:28.794555 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:41:28.794738 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:41:28.823577 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:41:28.841553 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:41:28.857534 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:41:28.875508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:41:28.895548 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:41:28.914563 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:41:28.931499 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:41:28.949589 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:41:28.968571 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:41:28.986532 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:41:29.004681 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:41:29.020522 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:41:29.020724 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 9 05:41:29.048607 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:41:29.066517 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:41:29.082513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:41:29.082655 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:41:29.101547 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:41:29.101705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:41:29.128691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:41:29.128932 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:41:29.233412 ignition[1047]: INFO : Ignition 2.22.0
Sep 9 05:41:29.233412 ignition[1047]: INFO : Stage: umount
Sep 9 05:41:29.233412 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:41:29.233412 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 9 05:41:29.233412 ignition[1047]: INFO : umount: umount passed
Sep 9 05:41:29.233412 ignition[1047]: INFO : Ignition finished successfully
Sep 9 05:41:29.148680 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:41:29.148865 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:41:29.167763 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:41:29.184366 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:41:29.184627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:41:29.225045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:41:29.240504 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:41:29.240720 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:41:29.279708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:41:29.280021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:41:29.308306 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:41:29.309671 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:41:29.309787 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:41:29.323037 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:41:29.323153 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:41:29.343606 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:41:29.343727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:41:29.362127 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:41:29.362316 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:41:29.368582 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:41:29.368637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:41:29.385627 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 05:41:29.385695 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 05:41:29.410705 systemd[1]: Stopped target network.target - Network.
Sep 9 05:41:29.419712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:41:29.419823 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:41:29.445742 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:41:29.454757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:41:29.458505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:41:29.480623 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:41:29.488848 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:41:29.519690 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:41:29.519775 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:41:29.535652 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:41:29.535728 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:41:29.542736 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:41:29.542842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:41:29.575765 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:41:29.575863 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:41:29.583779 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:41:29.583882 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:41:29.600037 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:41:29.615893 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:41:29.641154 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:41:29.641356 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:41:29.661337 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:41:29.661663 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:41:29.661938 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:41:29.677210 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:41:29.678933 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:41:29.682683 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:41:29.682743 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:41:29.711113 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:41:29.727428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:41:29.727588 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:41:29.753705 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:41:29.753803 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:41:29.765946 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:41:29.766032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:41:29.790702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:41:29.790813 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:41:29.808849 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:41:30.220470 systemd-journald[208]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:41:29.817855 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:41:29.817959 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:41:29.824832 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:41:29.825115 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:41:29.855967 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:41:29.856103 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:41:29.872324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:41:29.872426 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:41:29.888738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:41:29.888801 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:41:29.906624 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:41:29.906726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:41:29.933752 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:41:29.933856 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:41:29.967731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:41:29.967849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:41:30.005202 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:41:30.020418 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:41:30.020669 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:41:30.049932 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:41:30.050044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:41:30.058802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:41:30.058886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:41:30.087627 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:41:30.087708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:41:30.087758 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:41:30.088446 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:41:30.088590 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:41:30.106184 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:41:30.123775 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:41:30.165870 systemd[1]: Switching root.
Sep 9 05:41:30.514455 systemd-journald[208]: Journal stopped
Sep 9 05:41:33.164138 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:41:33.164203 kernel: SELinux: policy capability open_perms=1
Sep 9 05:41:33.164225 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:41:33.164725 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:41:33.165384 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:41:33.165412 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:41:33.165443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:41:33.165465 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:41:33.165486 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:41:33.165505 kernel: audit: type=1403 audit(1757396490.948:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:41:33.165529 systemd[1]: Successfully loaded SELinux policy in 126.835ms.
Sep 9 05:41:33.165549 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.725ms.
Sep 9 05:41:33.165571 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:41:33.165598 systemd[1]: Detected virtualization google.
Sep 9 05:41:33.165618 systemd[1]: Detected architecture x86-64.
Sep 9 05:41:33.165639 systemd[1]: Detected first boot.
Sep 9 05:41:33.165659 systemd[1]: Initializing machine ID from random generator.
Sep 9 05:41:33.165679 zram_generator::config[1089]: No configuration found.
Sep 9 05:41:33.165706 kernel: Guest personality initialized and is inactive
Sep 9 05:41:33.165724 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 05:41:33.165743 kernel: Initialized host personality
Sep 9 05:41:33.165762 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:41:33.165781 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:41:33.165805 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:41:33.165828 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:41:33.165855 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:41:33.165877 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:41:33.165900 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:41:33.165923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:41:33.165946 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:41:33.165968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:41:33.165991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:41:33.166017 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:41:33.166038 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:41:33.166070 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:41:33.166095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:41:33.166116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:41:33.166137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:41:33.166157 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:41:33.166179 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:41:33.166207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:41:33.166232 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:41:33.166270 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:41:33.166292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:41:33.166313 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:41:33.166335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:41:33.166357 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:41:33.166379 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:41:33.166405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:41:33.166427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:41:33.166448 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:41:33.166469 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:41:33.166498 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:41:33.166520 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:41:33.166541 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:41:33.166568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:41:33.166592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:41:33.166614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:41:33.166636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:41:33.166658 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:41:33.166680 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:41:33.166706 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:41:33.166729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:33.166751 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:41:33.166779 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:41:33.166801 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:41:33.166824 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:41:33.166847 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:41:33.166869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:41:33.166896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:33.166919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:41:33.166941 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:41:33.166963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:33.166986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:41:33.167008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:33.167030 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:41:33.167062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:33.167088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:41:33.167115 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:41:33.167138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:41:33.167160 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:41:33.167192 kernel: loop: module loaded
Sep 9 05:41:33.167213 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:41:33.167235 kernel: fuse: init (API version 7.41)
Sep 9 05:41:33.168314 kernel: ACPI: bus type drm_connector registered
Sep 9 05:41:33.168346 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:33.168377 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:41:33.168400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:41:33.168422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:41:33.168445 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:41:33.168468 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:41:33.168538 systemd-journald[1177]: Collecting audit messages is disabled.
Sep 9 05:41:33.168588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:41:33.168612 systemd-journald[1177]: Journal started
Sep 9 05:41:33.168655 systemd-journald[1177]: Runtime Journal (/run/log/journal/1691f758f6474ef2a8dcd89c90c72311) is 8M, max 148.9M, 140.9M free.
Sep 9 05:41:31.899602 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:41:31.925158 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 05:41:31.925844 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:41:33.188744 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:41:33.188846 systemd[1]: Stopped verity-setup.service.
Sep 9 05:41:33.213304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:33.226328 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:41:33.237065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:41:33.246690 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:41:33.257678 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:41:33.266683 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:41:33.275686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:41:33.284719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:41:33.293990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:41:33.304882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:41:33.315848 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:41:33.316212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:41:33.326818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:33.327139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:33.337880 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:41:33.338201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:41:33.347810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:33.348155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:33.358877 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:41:33.359224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:41:33.368786 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:33.369082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:33.378867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:41:33.389958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:41:33.400945 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:41:33.411924 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:41:33.422915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:41:33.446382 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:41:33.457070 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:41:33.472580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:41:33.481460 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:41:33.481723 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:41:33.491809 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:41:33.504088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:41:33.513696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:33.524786 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:41:33.535951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:41:33.546764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:41:33.549775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:41:33.559542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:41:33.566376 systemd-journald[1177]: Time spent on flushing to /var/log/journal/1691f758f6474ef2a8dcd89c90c72311 is 118.789ms for 960 entries.
Sep 9 05:41:33.566376 systemd-journald[1177]: System Journal (/var/log/journal/1691f758f6474ef2a8dcd89c90c72311) is 8M, max 584.8M, 576.8M free.
Sep 9 05:41:33.752307 systemd-journald[1177]: Received client request to flush runtime journal.
Sep 9 05:41:33.752404 kernel: loop0: detected capacity change from 0 to 110984
Sep 9 05:41:33.566081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:41:33.588574 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:41:33.602571 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:41:33.617624 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:41:33.627721 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:41:33.643488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:41:33.657578 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:41:33.672463 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:41:33.683042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:41:33.766471 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:41:33.779350 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:41:33.780969 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:41:33.792309 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:41:33.824633 kernel: loop1: detected capacity change from 0 to 221472
Sep 9 05:41:33.824743 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:41:33.837560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:41:33.914774 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 9 05:41:33.914816 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Sep 9 05:41:33.934729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:41:33.968300 kernel: loop2: detected capacity change from 0 to 128016
Sep 9 05:41:34.043294 kernel: loop3: detected capacity change from 0 to 50720
Sep 9 05:41:34.116536 kernel: loop4: detected capacity change from 0 to 110984
Sep 9 05:41:34.160647 kernel: loop5: detected capacity change from 0 to 221472
Sep 9 05:41:34.220316 kernel: loop6: detected capacity change from 0 to 128016
Sep 9 05:41:34.274355 kernel: loop7: detected capacity change from 0 to 50720
Sep 9 05:41:34.309513 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 9 05:41:34.311842 (sd-merge)[1235]: Merged extensions into '/usr'.
Sep 9 05:41:34.322921 systemd[1]: Reload requested from client PID 1212 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:41:34.323307 systemd[1]: Reloading...
Sep 9 05:41:34.480322 zram_generator::config[1258]: No configuration found.
Sep 9 05:41:34.650155 ldconfig[1207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:41:34.958986 systemd[1]: Reloading finished in 634 ms.
Sep 9 05:41:34.975559 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:41:34.985193 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:41:35.009488 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:41:35.020443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:41:35.055848 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:41:35.055882 systemd[1]: Reloading...
Sep 9 05:41:35.112500 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:41:35.113151 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:41:35.113868 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:41:35.114558 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:41:35.118479 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:41:35.119292 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 9 05:41:35.119441 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Sep 9 05:41:35.128876 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:41:35.129105 systemd-tmpfiles[1302]: Skipping /boot
Sep 9 05:41:35.163947 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:41:35.163987 systemd-tmpfiles[1302]: Skipping /boot
Sep 9 05:41:35.218308 zram_generator::config[1332]: No configuration found.
Sep 9 05:41:35.453989 systemd[1]: Reloading finished in 397 ms.
Sep 9 05:41:35.470536 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:41:35.497617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:41:35.518851 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:41:35.536188 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:41:35.554417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:41:35.573603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:41:35.585616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:41:35.599188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:41:35.617364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:35.617707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:35.621856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:35.633040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:35.645769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:35.654604 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:35.655067 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:35.660213 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:41:35.670346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:35.680365 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:41:35.696895 augenrules[1400]: No rules
Sep 9 05:41:35.699468 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:41:35.700536 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:41:35.712342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:35.712701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:35.718343 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Sep 9 05:41:35.723816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:35.724178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:35.735886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:35.736181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:35.747070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:41:35.763361 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:41:35.778040 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:41:35.796555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:41:35.821301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:35.823574 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:41:35.831641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:35.837429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:35.850525 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:41:35.862534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:35.882422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:35.895382 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 9 05:41:35.902489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:35.902567 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:35.919141 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:41:35.928410 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:41:35.934403 augenrules[1428]: /sbin/augenrules: No change
Sep 9 05:41:35.939660 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:41:35.948394 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:41:35.948455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:35.950000 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:41:35.957891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:35.958191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:35.970036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:41:35.973886 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:41:35.983015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:35.984593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:35.990545 augenrules[1470]: No rules
Sep 9 05:41:35.995922 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:41:35.996277 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:41:36.005840 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:36.006136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:36.019514 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:41:36.051647 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 9 05:41:36.064647 systemd-resolved[1380]: Positive Trust Anchors:
Sep 9 05:41:36.064677 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:41:36.064742 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:41:36.071791 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 9 05:41:36.081357 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:41:36.081445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:41:36.081523 systemd-resolved[1380]: Defaulting to hostname 'linux'.
Sep 9 05:41:36.086642 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:41:36.098997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:41:36.109399 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:41:36.118525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:41:36.128430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:41:36.138422 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 05:41:36.148604 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:41:36.157658 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:41:36.168414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:41:36.178411 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:41:36.178475 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:41:36.186392 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:41:36.196506 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:41:36.208610 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:41:36.222178 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:41:36.234670 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:41:36.244443 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:41:36.254802 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:41:36.270302 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 9 05:41:36.280621 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:41:36.294966 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Sep 9 05:41:36.299776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:41:36.302165 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 9 05:41:36.304937 systemd-networkd[1457]: lo: Link UP
Sep 9 05:41:36.304953 systemd-networkd[1457]: lo: Gained carrier
Sep 9 05:41:36.312920 systemd-networkd[1457]: Enumeration completed
Sep 9 05:41:36.316167 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:41:36.316185 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:41:36.317626 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:41:36.319288 systemd-networkd[1457]: eth0: Link UP
Sep 9 05:41:36.319579 systemd-networkd[1457]: eth0: Gained carrier
Sep 9 05:41:36.319607 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:41:36.323800 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:41:36.332435 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:41:36.332595 systemd-networkd[1457]: eth0: Overlong DHCP hostname received, shortened from 'ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5.c.flatcar-212911.internal' to 'ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5'
Sep 9 05:41:36.332613 systemd-networkd[1457]: eth0: DHCPv4 address 10.128.0.68/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 9 05:41:36.341478 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:41:36.352294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 05:41:36.359265 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 05:41:36.362556 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:41:36.362955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:41:36.367682 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 05:41:36.371291 kernel: ACPI: button: Power Button [PWRF]
Sep 9 05:41:36.380265 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 9 05:41:36.386279 kernel: ACPI: button: Sleep Button [SLPF]
Sep 9 05:41:36.393428 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:41:36.405323 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 9 05:41:36.415033 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:41:36.429928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:41:36.454477 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:41:36.463380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:41:36.468550 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 05:41:36.483437 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:41:36.496569 systemd[1]: Started ntpd.service - Network Time Service.
Sep 9 05:41:36.498755 jq[1511]: false
Sep 9 05:41:36.510117 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:41:36.528540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:41:36.566304 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Sep 9 05:41:36.566326 oslogin_cache_refresh[1513]: Refreshing passwd entry cache
Sep 9 05:41:36.583762 extend-filesystems[1512]: Found /dev/sda6
Sep 9 05:41:36.600513 extend-filesystems[1512]: Found /dev/sda9
Sep 9 05:41:36.599869 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:41:36.586890 oslogin_cache_refresh[1513]: Failure getting users, quitting
Sep 9 05:41:36.607810 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting
Sep 9 05:41:36.607810 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:41:36.607810 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache
Sep 9 05:41:36.607810 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting
Sep 9 05:41:36.607810 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:41:36.586923 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:41:36.587000 oslogin_cache_refresh[1513]: Refreshing group entry cache
Sep 9 05:41:36.590644 oslogin_cache_refresh[1513]: Failure getting groups, quitting
Sep 9 05:41:36.590667 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:41:36.610369 extend-filesystems[1512]: Checking size of /dev/sda9
Sep 9 05:41:36.625527 coreos-metadata[1508]: Sep 09 05:41:36.625 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 9 05:41:36.626857 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:41:36.627941 coreos-metadata[1508]: Sep 09 05:41:36.627 INFO Fetch successful
Sep 9 05:41:36.628104 coreos-metadata[1508]: Sep 09 05:41:36.628 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 9 05:41:36.631379 coreos-metadata[1508]: Sep 09 05:41:36.629 INFO Fetch successful
Sep 9 05:41:36.631379 coreos-metadata[1508]: Sep 09 05:41:36.629 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 9 05:41:36.632817 coreos-metadata[1508]: Sep 09 05:41:36.632 INFO Fetch successful
Sep 9 05:41:36.632938 coreos-metadata[1508]: Sep 09 05:41:36.632 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 9 05:41:36.634407 coreos-metadata[1508]: Sep 09 05:41:36.634 INFO Fetch successful
Sep 9 05:41:36.638082 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 9 05:41:36.640162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:41:36.646751 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:41:36.659170 extend-filesystems[1512]: Resized partition /dev/sda9
Sep 9 05:41:36.667713 extend-filesystems[1540]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:41:36.693513 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 9 05:41:36.662656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:41:36.687156 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:41:36.707820 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 9 05:41:36.717396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:41:36.721361 extend-filesystems[1540]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 9 05:41:36.721361 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 9 05:41:36.721361 extend-filesystems[1540]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 9 05:41:36.755503 ntpd[1517]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting
Sep 9 05:41:36.766982 jq[1539]: true
Sep 9 05:41:36.729630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: ----------------------------------------------------
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: corporation. Support and training for ntp-4 are
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: available at https://www.nwtime.org/support
Sep 9 05:41:36.767633 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: ----------------------------------------------------
Sep 9 05:41:36.768153 extend-filesystems[1512]: Resized filesystem in /dev/sda9
Sep 9 05:41:36.755538 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:41:36.730014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:41:36.775940 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: proto: precision = 0.076 usec (-24)
Sep 9 05:41:36.775940 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: basedate set to 2025-08-28
Sep 9 05:41:36.775940 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:41:36.755553 ntpd[1517]: ----------------------------------------------------
Sep 9 05:41:36.730592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:41:36.755567 ntpd[1517]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:41:36.731490 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:41:36.755580 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:41:36.767316 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 05:41:36.755595 ntpd[1517]: corporation. Support and training for ntp-4 are
Sep 9 05:41:36.768733 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 05:41:36.755608 ntpd[1517]: available at https://www.nwtime.org/support
Sep 9 05:41:36.755622 ntpd[1517]: ----------------------------------------------------
Sep 9 05:41:36.771214 ntpd[1517]: proto: precision = 0.076 usec (-24)
Sep 9 05:41:36.773407 ntpd[1517]: basedate set to 2025-08-28
Sep 9 05:41:36.773433 ntpd[1517]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:41:36.782313 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:41:36.783384 update_engine[1536]: I20250909 05:41:36.782727 1536 main.cc:92] Flatcar Update Engine starting
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listen normally on 3 eth0 10.128.0.68:123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listen normally on 4 lo [::1]:123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: bind(21) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:44%2#123
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: failed to init interface for address fe80::4001:aff:fe80:44%2
Sep 9 05:41:36.783691 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: Listening on routing socket on fd #21 for interface updates
Sep 9 05:41:36.782390 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:41:36.782694 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:41:36.782751 ntpd[1517]: Listen normally on 3 eth0 10.128.0.68:123
Sep 9 05:41:36.782824 ntpd[1517]: Listen normally on 4 lo [::1]:123
Sep 9 05:41:36.782889 ntpd[1517]: bind(21) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:41:36.782920 ntpd[1517]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:44%2#123
Sep 9 05:41:36.782941 ntpd[1517]: failed to init interface for address fe80::4001:aff:fe80:44%2
Sep 9 05:41:36.782987 ntpd[1517]: Listening on routing socket on fd #21 for interface updates
Sep 9 05:41:36.785002 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:41:36.786418 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:41:36.794512 ntpd[1517]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:41:36.794565 ntpd[1517]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:41:36.794700 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:41:36.794700 ntpd[1517]: 9 Sep 05:41:36 ntpd[1517]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:41:36.799927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:41:36.801362 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:41:36.875121 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 05:41:36.889033 kernel: EDAC MC: Ver: 3.0.0
Sep 9 05:41:36.971372 jq[1560]: true
Sep 9 05:41:37.003476 systemd[1]: Reached target network.target - Network.
Sep 9 05:41:37.018687 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:41:37.027521 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 05:41:37.038762 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:41:37.056174 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:41:37.073421 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:41:37.080793 tar[1558]: linux-amd64/helm
Sep 9 05:41:37.097769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 9 05:41:37.131813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:41:37.220695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 05:41:37.253356 bash[1602]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:41:37.252654 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:41:37.257979 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:41:37.283537 systemd[1]: Starting sshkeys.service...
Sep 9 05:41:37.373992 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 9 05:41:37.376942 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:41:37.391791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:41:37.392288 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:41:37.392535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:41:37.392735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:41:37.402984 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 05:41:37.412863 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 05:41:37.430380 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:41:37.445903 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 05:41:37.448864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:41:37.469289 update_engine[1536]: I20250909 05:41:37.462006 1536 update_check_scheduler.cc:74] Next update check in 8m46s
Sep 9 05:41:37.462971 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 05:41:37.464216 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:41:37.497843 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:41:37.615905 coreos-metadata[1612]: Sep 09 05:41:37.609 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 9 05:41:37.635619 coreos-metadata[1612]: Sep 09 05:41:37.633 INFO Fetch failed with 404: resource not found
Sep 9 05:41:37.635619 coreos-metadata[1612]: Sep 09 05:41:37.633 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 9 05:41:37.635619 coreos-metadata[1612]: Sep 09 05:41:37.635 INFO Fetch successful
Sep 9 05:41:37.635619 coreos-metadata[1612]: Sep 09 05:41:37.635 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 9 05:41:37.644165 coreos-metadata[1612]: Sep 09 05:41:37.640 INFO Fetch failed with 404: resource not found
Sep 9 05:41:37.644165 coreos-metadata[1612]: Sep 09 05:41:37.640 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 9 05:41:37.644165 coreos-metadata[1612]: Sep 09 05:41:37.640 INFO Fetch failed with 404: resource not found
Sep 9 05:41:37.644165 coreos-metadata[1612]: Sep 09 05:41:37.643 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 9 05:41:37.648525 coreos-metadata[1612]: Sep 09 05:41:37.648 INFO Fetch successful
Sep 9 05:41:37.657668 unknown[1612]: wrote ssh authorized keys file for user: core
Sep 9 05:41:37.756187 ntpd[1517]: bind(24) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:41:37.762205 ntpd[1517]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:44%2#123
Sep 9 05:41:37.762956 ntpd[1517]: 9 Sep 05:41:37 ntpd[1517]: bind(24) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 05:41:37.762956 ntpd[1517]: 9 Sep 05:41:37 ntpd[1517]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:44%2#123
Sep 9 05:41:37.762956 ntpd[1517]: 9 Sep 05:41:37 ntpd[1517]: failed to init interface for address fe80::4001:aff:fe80:44%2
Sep 9 05:41:37.761891 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:41:37.762233 ntpd[1517]: failed to init interface for address fe80::4001:aff:fe80:44%2
Sep 9 05:41:37.787804 update-ssh-keys[1622]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:41:37.790989 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 05:41:37.813214 systemd[1]: Finished sshkeys.service.
Sep 9 05:41:37.869906 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 05:41:37.942551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 05:41:37.956236 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 05:41:37.976668 systemd[1]: Started sshd@0-10.128.0.68:22-139.178.89.65:42230.service - OpenSSH per-connection server daemon (139.178.89.65:42230).
Sep 9 05:41:38.070742 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 05:41:38.071458 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 05:41:38.085682 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 05:41:38.181122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 05:41:38.190696 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 05:41:38.190745 systemd-logind[1533]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 9 05:41:38.190781 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 05:41:38.191524 systemd-logind[1533]: New seat seat0.
Sep 9 05:41:38.201874 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 05:41:38.206662 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:41:38.214827 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 05:41:38.224485 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 05:41:38.232701 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:41:38.257549 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 05:41:38.258704 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 05:41:38.259692 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1615 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 05:41:38.278738 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 05:41:38.282541 systemd-networkd[1457]: eth0: Gained IPv6LL
Sep 9 05:41:38.296584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 05:41:38.307150 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 05:41:38.323841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:41:38.333213 containerd[1603]: time="2025-09-09T05:41:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:41:38.337302 containerd[1603]: time="2025-09-09T05:41:38.336697637Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:41:38.338396 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 05:41:38.352798 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 9 05:41:38.431460 init.sh[1660]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 9 05:41:38.435508 init.sh[1660]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 9 05:41:38.439666 init.sh[1660]: + /usr/bin/google_instance_setup
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.481940988Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.548µs"
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482000385Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482037571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482342419Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482386760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482438927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482535468Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482558168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.482980490Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.483009127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.483031526Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:41:38.485952 containerd[1603]: time="2025-09-09T05:41:38.483046408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:41:38.487137 containerd[1603]: time="2025-09-09T05:41:38.483176775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.494305839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.494426794Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.494451231Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.494523833Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.494981572Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:41:38.497214 containerd[1603]: time="2025-09-09T05:41:38.495091377Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.514809629Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.514964982Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515067821Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515097709Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515140584Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515170074Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515225372Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515279071Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515300899Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515319282Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515354074Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:41:38.515692 containerd[1603]: time="2025-09-09T05:41:38.515378648Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.516374193Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.517641605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.517706153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.517760663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.517782292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.517802563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.519096878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.519153627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.519179548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:41:38.519275 containerd[1603]: time="2025-09-09T05:41:38.519203435Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:41:38.519275 containerd[1603]:
containerd[1603]: time="2025-09-09T05:41:38.519222778Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:41:38.521281 containerd[1603]: time="2025-09-09T05:41:38.520829094Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:41:38.521281 containerd[1603]: time="2025-09-09T05:41:38.520873839Z" level=info msg="Start snapshots syncer" Sep 9 05:41:38.521281 containerd[1603]: time="2025-09-09T05:41:38.520909544Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:41:38.526951 containerd[1603]: time="2025-09-09T05:41:38.523218474Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMou
ntsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:41:38.526951 containerd[1603]: time="2025-09-09T05:41:38.525478205Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:41:38.527338 containerd[1603]: time="2025-09-09T05:41:38.525711485Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527615674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527687812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527711214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527731075Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527762452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527782863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:41:38.527874 containerd[1603]: time="2025-09-09T05:41:38.527803529Z" level=info 
msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.527854499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528290608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528340508Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528413984Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528441420Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528457259Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528474858Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528490205Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528526361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528548311Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 
05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528580232Z" level=info msg="runtime interface created" Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.528590271Z" level=info msg="created NRI interface" Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.530298441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.530419227Z" level=info msg="Connect containerd service" Sep 9 05:41:38.530550 containerd[1603]: time="2025-09-09T05:41:38.530479293Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:41:38.535563 containerd[1603]: time="2025-09-09T05:41:38.533957862Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:41:38.543796 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:41:38.636106 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 42230 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:38.646694 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:38.700178 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:41:38.712441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:41:38.786853 systemd-logind[1533]: New session 1 of user core. Sep 9 05:41:38.805857 tar[1558]: linux-amd64/LICENSE Sep 9 05:41:38.805857 tar[1558]: linux-amd64/README.md Sep 9 05:41:38.808977 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:41:38.831559 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 9 05:41:38.861134 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:41:38.897413 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:41:38.908031 polkitd[1655]: Started polkitd version 126 Sep 9 05:41:38.913024 systemd-logind[1533]: New session c1 of user core. Sep 9 05:41:38.933857 polkitd[1655]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 05:41:38.936016 polkitd[1655]: Loading rules from directory /run/polkit-1/rules.d Sep 9 05:41:38.938325 polkitd[1655]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:41:38.940202 containerd[1603]: time="2025-09-09T05:41:38.939161235Z" level=info msg="Start subscribing containerd event" Sep 9 05:41:38.940574 containerd[1603]: time="2025-09-09T05:41:38.940458369Z" level=info msg="Start recovering state" Sep 9 05:41:38.943297 containerd[1603]: time="2025-09-09T05:41:38.941219161Z" level=info msg="Start event monitor" Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944210693Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944276668Z" level=info msg="Start streaming server" Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944299305Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944312596Z" level=info msg="runtime interface starting up..." Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944323547Z" level=info msg="starting plugins..." Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944354253Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.944888120Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.945001102Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:41:38.947629 containerd[1603]: time="2025-09-09T05:41:38.945120714Z" level=info msg="containerd successfully booted in 0.612597s" Sep 9 05:41:38.945607 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:41:38.945082 polkitd[1655]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 05:41:38.945147 polkitd[1655]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:41:38.945208 polkitd[1655]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 05:41:38.951318 polkitd[1655]: Finished loading, compiling and executing 2 rules Sep 9 05:41:38.954953 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 05:41:38.956756 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 05:41:38.959746 polkitd[1655]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 05:41:39.006231 systemd-hostnamed[1615]: Hostname set to (transient) Sep 9 05:41:39.007905 systemd-resolved[1380]: System hostname changed to 'ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5'. Sep 9 05:41:39.261948 systemd[1684]: Queued start job for default target default.target. Sep 9 05:41:39.274639 systemd[1684]: Created slice app.slice - User Application Slice. Sep 9 05:41:39.275159 systemd[1684]: Reached target paths.target - Paths. Sep 9 05:41:39.275295 systemd[1684]: Reached target timers.target - Timers. Sep 9 05:41:39.279512 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:41:39.316855 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:41:39.317067 systemd[1684]: Reached target sockets.target - Sockets. 
Sep 9 05:41:39.317146 systemd[1684]: Reached target basic.target - Basic System. Sep 9 05:41:39.317217 systemd[1684]: Reached target default.target - Main User Target. Sep 9 05:41:39.317292 systemd[1684]: Startup finished in 375ms. Sep 9 05:41:39.317492 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:41:39.334770 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:41:39.583637 systemd[1]: Started sshd@1-10.128.0.68:22-139.178.89.65:42238.service - OpenSSH per-connection server daemon (139.178.89.65:42238). Sep 9 05:41:39.618606 instance-setup[1664]: INFO Running google_set_multiqueue. Sep 9 05:41:39.656594 instance-setup[1664]: INFO Set channels for eth0 to 2. Sep 9 05:41:39.669343 instance-setup[1664]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 9 05:41:39.672793 instance-setup[1664]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 9 05:41:39.673684 instance-setup[1664]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 9 05:41:39.675858 instance-setup[1664]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 9 05:41:39.676193 instance-setup[1664]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 9 05:41:39.679612 instance-setup[1664]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 9 05:41:39.680345 instance-setup[1664]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Sep 9 05:41:39.682395 instance-setup[1664]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 9 05:41:39.694566 instance-setup[1664]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 9 05:41:39.701471 instance-setup[1664]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 9 05:41:39.703917 instance-setup[1664]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 9 05:41:39.704434 instance-setup[1664]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 9 05:41:39.741399 init.sh[1660]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 9 05:41:39.935213 startup-script[1741]: INFO Starting startup scripts. Sep 9 05:41:39.942004 startup-script[1741]: INFO No startup scripts found in metadata. Sep 9 05:41:39.942082 startup-script[1741]: INFO Finished running startup scripts. Sep 9 05:41:39.947116 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 42238 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:39.952072 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:39.967321 systemd-logind[1533]: New session 2 of user core. Sep 9 05:41:39.974107 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:41:39.979401 init.sh[1660]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 9 05:41:39.979518 init.sh[1660]: + daemon_pids=() Sep 9 05:41:39.979518 init.sh[1660]: + for d in accounts clock_skew network Sep 9 05:41:39.979993 init.sh[1660]: + daemon_pids+=($!) Sep 9 05:41:39.979993 init.sh[1660]: + for d in accounts clock_skew network Sep 9 05:41:39.980315 init.sh[1660]: + daemon_pids+=($!) 
Sep 9 05:41:39.980376 init.sh[1660]: + for d in accounts clock_skew network Sep 9 05:41:39.980789 init.sh[1744]: + /usr/bin/google_accounts_daemon Sep 9 05:41:39.981384 init.sh[1745]: + /usr/bin/google_clock_skew_daemon Sep 9 05:41:39.983192 init.sh[1746]: + /usr/bin/google_network_daemon Sep 9 05:41:39.984325 init.sh[1660]: + daemon_pids+=($!) Sep 9 05:41:39.984325 init.sh[1660]: + NOTIFY_SOCKET=/run/systemd/notify Sep 9 05:41:39.984325 init.sh[1660]: + /usr/bin/systemd-notify --ready Sep 9 05:41:40.021806 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 9 05:41:40.038272 init.sh[1660]: + wait -n 1744 1745 1746 Sep 9 05:41:40.188860 sshd[1748]: Connection closed by 139.178.89.65 port 42238 Sep 9 05:41:40.189574 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:40.204566 systemd[1]: sshd@1-10.128.0.68:22-139.178.89.65:42238.service: Deactivated successfully. Sep 9 05:41:40.210789 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:41:40.219875 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:41:40.222843 systemd-logind[1533]: Removed session 2. Sep 9 05:41:40.432457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:40.443148 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:41:40.451227 google-clock-skew[1745]: INFO Starting Google Clock Skew daemon. Sep 9 05:41:40.452898 systemd[1]: Startup finished in 3.766s (kernel) + 11.046s (initrd) + 9.619s (userspace) = 24.432s. Sep 9 05:41:40.464146 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:41:40.465640 google-clock-skew[1745]: INFO Clock drift token has changed: 0. Sep 9 05:41:40.536570 google-networking[1746]: INFO Starting Google Networking daemon. 
Sep 9 05:41:40.557335 groupadd[1770]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 9 05:41:40.564228 groupadd[1770]: group added to /etc/gshadow: name=google-sudoers Sep 9 05:41:40.623623 groupadd[1770]: new group: name=google-sudoers, GID=1000 Sep 9 05:41:40.662949 google-accounts[1744]: INFO Starting Google Accounts daemon. Sep 9 05:41:40.677548 google-accounts[1744]: WARNING OS Login not installed. Sep 9 05:41:40.679745 google-accounts[1744]: INFO Creating a new user account for 0. Sep 9 05:41:40.685111 init.sh[1783]: useradd: invalid user name '0': use --badname to ignore Sep 9 05:41:40.685528 google-accounts[1744]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 9 05:41:40.756226 ntpd[1517]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:44%2]:123 Sep 9 05:41:40.757028 ntpd[1517]: 9 Sep 05:41:40 ntpd[1517]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:44%2]:123 Sep 9 05:41:41.218373 kubelet[1764]: E0909 05:41:41.218276 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:41:41.222547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:41:41.222852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:41:41.223465 systemd[1]: kubelet.service: Consumed 1.382s CPU time, 265.6M memory peak. Sep 9 05:41:41.000282 systemd-resolved[1380]: Clock change detected. Flushing caches. Sep 9 05:41:41.014580 systemd-journald[1177]: Time jumped backwards, rotating. Sep 9 05:41:41.001539 google-clock-skew[1745]: INFO Synced system time with hardware clock. 
Sep 9 05:41:41.950441 systemd[1]: Started sshd@2-10.128.0.68:22-139.178.89.65:42250.service - OpenSSH per-connection server daemon (139.178.89.65:42250). Sep 9 05:41:42.256461 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 42250 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:42.258461 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:42.266538 systemd-logind[1533]: New session 3 of user core. Sep 9 05:41:42.273855 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:41:42.472927 sshd[1792]: Connection closed by 139.178.89.65 port 42250 Sep 9 05:41:42.473800 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:42.478975 systemd[1]: sshd@2-10.128.0.68:22-139.178.89.65:42250.service: Deactivated successfully. Sep 9 05:41:42.481458 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:41:42.484292 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:41:42.485964 systemd-logind[1533]: Removed session 3. Sep 9 05:41:50.966343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:41:50.968945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:51.346219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:41:51.361146 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:41:51.416662 kubelet[1805]: E0909 05:41:51.416570 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:41:51.420983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:41:51.421220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:41:51.421811 systemd[1]: kubelet.service: Consumed 202ms CPU time, 111M memory peak. Sep 9 05:41:52.529484 systemd[1]: Started sshd@3-10.128.0.68:22-139.178.89.65:38158.service - OpenSSH per-connection server daemon (139.178.89.65:38158). Sep 9 05:41:52.846794 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 38158 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:52.848740 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:52.856663 systemd-logind[1533]: New session 4 of user core. Sep 9 05:41:52.863852 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:41:53.064173 sshd[1816]: Connection closed by 139.178.89.65 port 38158 Sep 9 05:41:53.065309 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:53.072182 systemd[1]: sshd@3-10.128.0.68:22-139.178.89.65:38158.service: Deactivated successfully. Sep 9 05:41:53.074841 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:41:53.076302 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:41:53.078235 systemd-logind[1533]: Removed session 4. 
Sep 9 05:41:53.119435 systemd[1]: Started sshd@4-10.128.0.68:22-139.178.89.65:38164.service - OpenSSH per-connection server daemon (139.178.89.65:38164). Sep 9 05:41:53.435356 sshd[1822]: Accepted publickey for core from 139.178.89.65 port 38164 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:53.437323 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:53.445756 systemd-logind[1533]: New session 5 of user core. Sep 9 05:41:53.449867 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:41:53.645688 sshd[1825]: Connection closed by 139.178.89.65 port 38164 Sep 9 05:41:53.646619 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:53.652914 systemd[1]: sshd@4-10.128.0.68:22-139.178.89.65:38164.service: Deactivated successfully. Sep 9 05:41:53.655776 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:41:53.657289 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:41:53.659168 systemd-logind[1533]: Removed session 5. Sep 9 05:41:53.700078 systemd[1]: Started sshd@5-10.128.0.68:22-139.178.89.65:38170.service - OpenSSH per-connection server daemon (139.178.89.65:38170). Sep 9 05:41:54.008966 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 38170 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:54.010679 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:54.018163 systemd-logind[1533]: New session 6 of user core. Sep 9 05:41:54.024819 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:41:54.225358 sshd[1834]: Connection closed by 139.178.89.65 port 38170 Sep 9 05:41:54.226241 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:54.231993 systemd[1]: sshd@5-10.128.0.68:22-139.178.89.65:38170.service: Deactivated successfully. 
Sep 9 05:41:54.234302 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:41:54.235443 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:41:54.237548 systemd-logind[1533]: Removed session 6. Sep 9 05:41:54.278835 systemd[1]: Started sshd@6-10.128.0.68:22-139.178.89.65:38176.service - OpenSSH per-connection server daemon (139.178.89.65:38176). Sep 9 05:41:54.599103 sshd[1840]: Accepted publickey for core from 139.178.89.65 port 38176 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:54.600880 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:54.608426 systemd-logind[1533]: New session 7 of user core. Sep 9 05:41:54.617883 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:41:54.793469 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:41:54.793986 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:54.808324 sudo[1844]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:54.851684 sshd[1843]: Connection closed by 139.178.89.65 port 38176 Sep 9 05:41:54.853173 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:54.859224 systemd[1]: sshd@6-10.128.0.68:22-139.178.89.65:38176.service: Deactivated successfully. Sep 9 05:41:54.861642 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:41:54.862942 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:41:54.864871 systemd-logind[1533]: Removed session 7. Sep 9 05:41:54.907132 systemd[1]: Started sshd@7-10.128.0.68:22-139.178.89.65:38186.service - OpenSSH per-connection server daemon (139.178.89.65:38186). 
Sep 9 05:41:55.217443 sshd[1850]: Accepted publickey for core from 139.178.89.65 port 38186 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:55.219187 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:55.226660 systemd-logind[1533]: New session 8 of user core. Sep 9 05:41:55.237837 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:41:55.400105 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:41:55.400614 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:55.407441 sudo[1855]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:55.421236 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:41:55.421723 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:55.435174 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:41:55.485625 augenrules[1877]: No rules Sep 9 05:41:55.487670 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:41:55.488014 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:41:55.489976 sudo[1854]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:55.533968 sshd[1853]: Connection closed by 139.178.89.65 port 38186 Sep 9 05:41:55.534818 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:55.540807 systemd[1]: sshd@7-10.128.0.68:22-139.178.89.65:38186.service: Deactivated successfully. Sep 9 05:41:55.543190 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:41:55.544758 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:41:55.546484 systemd-logind[1533]: Removed session 8. 
Sep 9 05:41:55.591022 systemd[1]: Started sshd@8-10.128.0.68:22-139.178.89.65:38190.service - OpenSSH per-connection server daemon (139.178.89.65:38190). Sep 9 05:41:55.902828 sshd[1886]: Accepted publickey for core from 139.178.89.65 port 38190 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:41:55.904710 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:55.910564 systemd-logind[1533]: New session 9 of user core. Sep 9 05:41:55.917838 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:41:56.082201 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:41:56.082722 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:56.545429 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:41:56.559154 (dockerd)[1908]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:41:56.895940 dockerd[1908]: time="2025-09-09T05:41:56.895561645Z" level=info msg="Starting up" Sep 9 05:41:56.896746 dockerd[1908]: time="2025-09-09T05:41:56.896708730Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:41:56.913068 dockerd[1908]: time="2025-09-09T05:41:56.912959550Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:41:56.963782 dockerd[1908]: time="2025-09-09T05:41:56.963732349Z" level=info msg="Loading containers: start." Sep 9 05:41:56.983621 kernel: Initializing XFRM netlink socket Sep 9 05:41:57.317792 systemd-networkd[1457]: docker0: Link UP Sep 9 05:41:57.323862 dockerd[1908]: time="2025-09-09T05:41:57.323795566Z" level=info msg="Loading containers: done." 
Sep 9 05:41:57.344896 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3005694368-merged.mount: Deactivated successfully. Sep 9 05:41:57.347538 dockerd[1908]: time="2025-09-09T05:41:57.346210928Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:41:57.347538 dockerd[1908]: time="2025-09-09T05:41:57.346334858Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:41:57.347538 dockerd[1908]: time="2025-09-09T05:41:57.346448149Z" level=info msg="Initializing buildkit" Sep 9 05:41:57.377275 dockerd[1908]: time="2025-09-09T05:41:57.377201370Z" level=info msg="Completed buildkit initialization" Sep 9 05:41:57.386420 dockerd[1908]: time="2025-09-09T05:41:57.386340307Z" level=info msg="Daemon has completed initialization" Sep 9 05:41:57.386679 dockerd[1908]: time="2025-09-09T05:41:57.386538937Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:41:57.386686 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:41:58.239821 containerd[1603]: time="2025-09-09T05:41:58.239719574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 05:41:58.770669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739485517.mount: Deactivated successfully. 
Sep 9 05:42:00.507334 containerd[1603]: time="2025-09-09T05:42:00.507244928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:00.509136 containerd[1603]: time="2025-09-09T05:42:00.508898373Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28086259" Sep 9 05:42:00.510535 containerd[1603]: time="2025-09-09T05:42:00.510488312Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:00.515681 containerd[1603]: time="2025-09-09T05:42:00.514306858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:00.515903 containerd[1603]: time="2025-09-09T05:42:00.515818168Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.276000132s" Sep 9 05:42:00.515903 containerd[1603]: time="2025-09-09T05:42:00.515868915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 05:42:00.516749 containerd[1603]: time="2025-09-09T05:42:00.516714390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 05:42:01.466082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 9 05:42:01.470252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:01.845820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:42:01.860309 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:42:01.965212 kubelet[2183]: E0909 05:42:01.964416 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:42:01.969474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:42:01.969750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:42:01.972480 systemd[1]: kubelet.service: Consumed 299ms CPU time, 110.3M memory peak. 
Sep 9 05:42:02.181544 containerd[1603]: time="2025-09-09T05:42:02.181376765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:02.183651 containerd[1603]: time="2025-09-09T05:42:02.183351587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24716615" Sep 9 05:42:02.184750 containerd[1603]: time="2025-09-09T05:42:02.184708617Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:02.193710 containerd[1603]: time="2025-09-09T05:42:02.193659226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:02.194749 containerd[1603]: time="2025-09-09T05:42:02.194705522Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.677949486s" Sep 9 05:42:02.194920 containerd[1603]: time="2025-09-09T05:42:02.194895684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 05:42:02.196173 containerd[1603]: time="2025-09-09T05:42:02.196137410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 05:42:03.587575 containerd[1603]: time="2025-09-09T05:42:03.587468538Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:03.589326 containerd[1603]: time="2025-09-09T05:42:03.589281883Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18784343" Sep 9 05:42:03.591036 containerd[1603]: time="2025-09-09T05:42:03.590967260Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:03.597617 containerd[1603]: time="2025-09-09T05:42:03.596269943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:03.597761 containerd[1603]: time="2025-09-09T05:42:03.597573984Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.4013941s" Sep 9 05:42:03.597875 containerd[1603]: time="2025-09-09T05:42:03.597848962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 05:42:03.598895 containerd[1603]: time="2025-09-09T05:42:03.598852667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 05:42:04.805524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129653333.mount: Deactivated successfully. 
Sep 9 05:42:05.503007 containerd[1603]: time="2025-09-09T05:42:05.502918748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:05.504515 containerd[1603]: time="2025-09-09T05:42:05.504460504Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30386150" Sep 9 05:42:05.505217 containerd[1603]: time="2025-09-09T05:42:05.505138829Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:05.508158 containerd[1603]: time="2025-09-09T05:42:05.508078115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:05.509242 containerd[1603]: time="2025-09-09T05:42:05.509171774Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.910275502s" Sep 9 05:42:05.509242 containerd[1603]: time="2025-09-09T05:42:05.509225549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 05:42:05.510618 containerd[1603]: time="2025-09-09T05:42:05.510261839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:42:06.022978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855721319.mount: Deactivated successfully. 
Sep 9 05:42:07.251581 containerd[1603]: time="2025-09-09T05:42:07.251472777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:07.253285 containerd[1603]: time="2025-09-09T05:42:07.253226527Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 9 05:42:07.254628 containerd[1603]: time="2025-09-09T05:42:07.254411067Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:07.260645 containerd[1603]: time="2025-09-09T05:42:07.259890041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:07.261724 containerd[1603]: time="2025-09-09T05:42:07.261390328Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.751081453s" Sep 9 05:42:07.261724 containerd[1603]: time="2025-09-09T05:42:07.261448877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:42:07.262466 containerd[1603]: time="2025-09-09T05:42:07.262413990Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:42:07.723008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224911266.mount: Deactivated successfully. 
Sep 9 05:42:07.729308 containerd[1603]: time="2025-09-09T05:42:07.729241146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:42:07.730379 containerd[1603]: time="2025-09-09T05:42:07.730048294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 9 05:42:07.731717 containerd[1603]: time="2025-09-09T05:42:07.731673532Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:42:07.734539 containerd[1603]: time="2025-09-09T05:42:07.734498605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:42:07.735555 containerd[1603]: time="2025-09-09T05:42:07.735514641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.060689ms" Sep 9 05:42:07.735830 containerd[1603]: time="2025-09-09T05:42:07.735679301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:42:07.736387 containerd[1603]: time="2025-09-09T05:42:07.736327753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 05:42:08.135165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810316316.mount: Deactivated 
successfully. Sep 9 05:42:08.626960 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 9 05:42:10.483227 containerd[1603]: time="2025-09-09T05:42:10.483131723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:10.484989 containerd[1603]: time="2025-09-09T05:42:10.484923532Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218" Sep 9 05:42:10.487619 containerd[1603]: time="2025-09-09T05:42:10.486145626Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:10.489986 containerd[1603]: time="2025-09-09T05:42:10.489940463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:10.491630 containerd[1603]: time="2025-09-09T05:42:10.491568535Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.755199109s" Sep 9 05:42:10.491786 containerd[1603]: time="2025-09-09T05:42:10.491760306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 05:42:12.216537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 05:42:12.221927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:12.574875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:42:12.589344 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:42:12.664387 kubelet[2345]: E0909 05:42:12.664298 2345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:42:12.669265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:42:12.669703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:42:12.670409 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.3M memory peak. Sep 9 05:42:14.647175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:42:14.647476 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.3M memory peak. Sep 9 05:42:14.651099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:14.695198 systemd[1]: Reload requested from client PID 2360 ('systemctl') (unit session-9.scope)... Sep 9 05:42:14.695230 systemd[1]: Reloading... Sep 9 05:42:14.936642 zram_generator::config[2404]: No configuration found. Sep 9 05:42:15.227859 systemd[1]: Reloading finished in 531 ms. Sep 9 05:42:15.310306 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:42:15.310674 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:42:15.311156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:42:15.311250 systemd[1]: kubelet.service: Consumed 190ms CPU time, 98.3M memory peak. Sep 9 05:42:15.314747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:15.664393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:42:15.679381 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:42:15.741980 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:42:15.741980 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 05:42:15.741980 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:42:15.742662 kubelet[2456]: I0909 05:42:15.742081 2456 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:42:16.265252 kubelet[2456]: I0909 05:42:16.265183 2456 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 05:42:16.265528 kubelet[2456]: I0909 05:42:16.265468 2456 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:42:16.267628 kubelet[2456]: I0909 05:42:16.266250 2456 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 05:42:16.298967 kubelet[2456]: E0909 05:42:16.298908 2456 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:16.300352 kubelet[2456]: I0909 
05:42:16.300312 2456 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:42:16.311409 kubelet[2456]: I0909 05:42:16.311373 2456 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:42:16.318010 kubelet[2456]: I0909 05:42:16.317968 2456 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:42:16.319125 kubelet[2456]: I0909 05:42:16.319071 2456 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 05:42:16.319477 kubelet[2456]: I0909 05:42:16.319412 2456 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:42:16.319784 kubelet[2456]: I0909 05:42:16.319463 2456 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.in
odesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:42:16.320001 kubelet[2456]: I0909 05:42:16.319792 2456 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:42:16.320001 kubelet[2456]: I0909 05:42:16.319813 2456 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 05:42:16.320001 kubelet[2456]: I0909 05:42:16.319989 2456 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:42:16.324847 kubelet[2456]: I0909 05:42:16.324796 2456 kubelet.go:408] "Attempting to sync node with API server" Sep 9 05:42:16.324847 kubelet[2456]: I0909 05:42:16.324842 2456 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:42:16.325004 kubelet[2456]: I0909 05:42:16.324901 2456 kubelet.go:314] "Adding apiserver pod source" Sep 9 05:42:16.325004 kubelet[2456]: I0909 05:42:16.324929 2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:42:16.339625 kubelet[2456]: W0909 05:42:16.339483 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Sep 9 05:42:16.339834 kubelet[2456]: E0909 05:42:16.339643 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:16.339902 kubelet[2456]: I0909 05:42:16.339845 2456 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:42:16.340724 kubelet[2456]: I0909 05:42:16.340691 2456 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:42:16.341838 kubelet[2456]: W0909 05:42:16.341796 2456 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:42:16.345468 kubelet[2456]: W0909 05:42:16.345394 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Sep 9 05:42:16.345722 kubelet[2456]: E0909 05:42:16.345689 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:16.348268 kubelet[2456]: I0909 05:42:16.347954 2456 server.go:1274] "Started kubelet" Sep 9 05:42:16.348787 kubelet[2456]: I0909 05:42:16.348744 2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:42:16.349456 kubelet[2456]: I0909 05:42:16.349430 2456 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:42:16.351378 kubelet[2456]: I0909 05:42:16.351335 
2456 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:42:16.353254 kubelet[2456]: I0909 05:42:16.353206 2456 server.go:449] "Adding debug handlers to kubelet server" Sep 9 05:42:16.360431 kubelet[2456]: I0909 05:42:16.360370 2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:42:16.365843 kubelet[2456]: I0909 05:42:16.365795 2456 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:42:16.369108 kubelet[2456]: E0909 05:42:16.366971 2456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5.186386d6b684ec8c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,UID:ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,},FirstTimestamp:2025-09-09 05:42:16.347913356 +0000 UTC m=+0.661833980,LastTimestamp:2025-09-09 05:42:16.347913356 +0000 UTC m=+0.661833980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,}" Sep 9 05:42:16.370103 kubelet[2456]: I0909 05:42:16.369960 2456 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:42:16.370225 kubelet[2456]: I0909 05:42:16.370121 2456 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:42:16.370225 kubelet[2456]: I0909 05:42:16.370211 2456 reconciler.go:26] "Reconciler: 
start to sync state" Sep 9 05:42:16.371413 kubelet[2456]: W0909 05:42:16.371071 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Sep 9 05:42:16.371413 kubelet[2456]: E0909 05:42:16.371153 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:16.371942 kubelet[2456]: I0909 05:42:16.371913 2456 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:42:16.372077 kubelet[2456]: I0909 05:42:16.372049 2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:42:16.373356 kubelet[2456]: E0909 05:42:16.373327 2456 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:42:16.374551 kubelet[2456]: E0909 05:42:16.373925 2456 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" not found" Sep 9 05:42:16.374551 kubelet[2456]: E0909 05:42:16.374049 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="200ms" Sep 9 05:42:16.374851 kubelet[2456]: I0909 05:42:16.374808 2456 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:42:16.395539 kubelet[2456]: I0909 05:42:16.395308 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:42:16.397640 kubelet[2456]: I0909 05:42:16.397587 2456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:42:16.397848 kubelet[2456]: I0909 05:42:16.397833 2456 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:42:16.397948 kubelet[2456]: I0909 05:42:16.397937 2456 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:42:16.398100 kubelet[2456]: E0909 05:42:16.398076 2456 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:42:16.405325 kubelet[2456]: W0909 05:42:16.405242 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Sep 9 05:42:16.405718 kubelet[2456]: E0909 05:42:16.405344 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:16.420665 kubelet[2456]: I0909 05:42:16.420631 2456 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:42:16.421528 kubelet[2456]: I0909 05:42:16.420979 2456 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:42:16.421528 kubelet[2456]: I0909 05:42:16.421127 2456 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:42:16.423415 kubelet[2456]: I0909 05:42:16.423380 2456 policy_none.go:49] "None policy: Start" Sep 9 05:42:16.425390 kubelet[2456]: I0909 05:42:16.425347 2456 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 05:42:16.425390 kubelet[2456]: I0909 05:42:16.425381 2456 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:42:16.437172 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Sep 9 05:42:16.449365 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:42:16.455186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:42:16.473494 kubelet[2456]: I0909 05:42:16.473140 2456 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:42:16.473692 kubelet[2456]: I0909 05:42:16.473512 2456 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:42:16.473692 kubelet[2456]: I0909 05:42:16.473532 2456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:42:16.474049 kubelet[2456]: I0909 05:42:16.474011 2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:42:16.477988 kubelet[2456]: E0909 05:42:16.477955 2456 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" not found" Sep 9 05:42:16.520896 systemd[1]: Created slice kubepods-burstable-pod2ca93e84d736ff259f6c3ecb144d0cae.slice - libcontainer container kubepods-burstable-pod2ca93e84d736ff259f6c3ecb144d0cae.slice. Sep 9 05:42:16.551965 systemd[1]: Created slice kubepods-burstable-pod29ee10439cdcf0e9392ab560d1594673.slice - libcontainer container kubepods-burstable-pod29ee10439cdcf0e9392ab560d1594673.slice. Sep 9 05:42:16.572125 systemd[1]: Created slice kubepods-burstable-podb99fac4be45dcda3cc06675e1d1fb6c7.slice - libcontainer container kubepods-burstable-podb99fac4be45dcda3cc06675e1d1fb6c7.slice. 
Sep 9 05:42:16.576272 kubelet[2456]: E0909 05:42:16.576208 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="400ms" Sep 9 05:42:16.579296 kubelet[2456]: I0909 05:42:16.578895 2456 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.579456 kubelet[2456]: E0909 05:42:16.579364 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672170 kubelet[2456]: I0909 05:42:16.672090 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672170 kubelet[2456]: I0909 05:42:16.672164 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672516 kubelet[2456]: I0909 05:42:16.672197 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672516 kubelet[2456]: I0909 05:42:16.672229 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672516 kubelet[2456]: I0909 05:42:16.672256 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672516 kubelet[2456]: I0909 05:42:16.672283 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672743 kubelet[2456]: I0909 05:42:16.672314 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2ca93e84d736ff259f6c3ecb144d0cae-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"2ca93e84d736ff259f6c3ecb144d0cae\") " pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672743 kubelet[2456]: I0909 05:42:16.672340 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.672743 kubelet[2456]: I0909 05:42:16.672373 2456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.786728 kubelet[2456]: I0909 05:42:16.786513 2456 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.788162 kubelet[2456]: E0909 05:42:16.788098 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:16.849906 containerd[1603]: time="2025-09-09T05:42:16.849377256Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:2ca93e84d736ff259f6c3ecb144d0cae,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:16.857403 containerd[1603]: time="2025-09-09T05:42:16.857338037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:29ee10439cdcf0e9392ab560d1594673,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:16.878387 containerd[1603]: time="2025-09-09T05:42:16.878286212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:b99fac4be45dcda3cc06675e1d1fb6c7,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:16.906905 containerd[1603]: time="2025-09-09T05:42:16.906717868Z" level=info msg="connecting to shim 6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6" address="unix:///run/containerd/s/d855533b7572464bd1830c983bda4f6c4d5e8c5a976b7859b59695b95f0fcdfc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:16.931421 containerd[1603]: time="2025-09-09T05:42:16.931348882Z" level=info msg="connecting to shim 39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7" address="unix:///run/containerd/s/7d77f4a2f55fdf8a7fed34f46fab20ae2cd22f76ca7339c3eb29c11df0a6fff1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:16.977391 kubelet[2456]: E0909 05:42:16.977319 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="800ms" Sep 9 05:42:16.982810 containerd[1603]: time="2025-09-09T05:42:16.982730749Z" level=info msg="connecting to shim f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54" 
address="unix:///run/containerd/s/77f26ba36f6002195fd691f0c9aedcd592e7d143e95e1555ddb5818655768cdc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:16.992881 systemd[1]: Started cri-containerd-6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6.scope - libcontainer container 6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6. Sep 9 05:42:17.053407 systemd[1]: Started cri-containerd-39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7.scope - libcontainer container 39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7. Sep 9 05:42:17.074846 systemd[1]: Started cri-containerd-f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54.scope - libcontainer container f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54. Sep 9 05:42:17.160659 containerd[1603]: time="2025-09-09T05:42:17.160572463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:2ca93e84d736ff259f6c3ecb144d0cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6\"" Sep 9 05:42:17.164409 kubelet[2456]: E0909 05:42:17.164311 2456 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819" Sep 9 05:42:17.168201 containerd[1603]: time="2025-09-09T05:42:17.168149723Z" level=info msg="CreateContainer within sandbox \"6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:42:17.185993 containerd[1603]: time="2025-09-09T05:42:17.185941465Z" level=info msg="Container 5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:17.195323 kubelet[2456]: I0909 
05:42:17.195280 2456 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:17.196400 kubelet[2456]: E0909 05:42:17.196354 2456 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:17.206779 containerd[1603]: time="2025-09-09T05:42:17.206627996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:29ee10439cdcf0e9392ab560d1594673,Namespace:kube-system,Attempt:0,} returns sandbox id \"39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7\"" Sep 9 05:42:17.208475 containerd[1603]: time="2025-09-09T05:42:17.208197137Z" level=info msg="CreateContainer within sandbox \"6267ec42d4db561d073ddef1776268ede77db551b9ff51bfaac010a3c40a35d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23\"" Sep 9 05:42:17.209878 containerd[1603]: time="2025-09-09T05:42:17.209845041Z" level=info msg="StartContainer for \"5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23\"" Sep 9 05:42:17.212297 containerd[1603]: time="2025-09-09T05:42:17.212257707Z" level=info msg="connecting to shim 5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23" address="unix:///run/containerd/s/d855533b7572464bd1830c983bda4f6c4d5e8c5a976b7859b59695b95f0fcdfc" protocol=ttrpc version=3 Sep 9 05:42:17.213261 kubelet[2456]: E0909 05:42:17.213216 2456 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819" Sep 9 05:42:17.217026 containerd[1603]: 
time="2025-09-09T05:42:17.216984544Z" level=info msg="CreateContainer within sandbox \"39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:42:17.231037 containerd[1603]: time="2025-09-09T05:42:17.230926024Z" level=info msg="Container 73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:17.243770 containerd[1603]: time="2025-09-09T05:42:17.243694940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5,Uid:b99fac4be45dcda3cc06675e1d1fb6c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54\"" Sep 9 05:42:17.248079 kubelet[2456]: E0909 05:42:17.247966 2456 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad" Sep 9 05:42:17.248868 systemd[1]: Started cri-containerd-5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23.scope - libcontainer container 5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23. 
Sep 9 05:42:17.250240 containerd[1603]: time="2025-09-09T05:42:17.250195703Z" level=info msg="CreateContainer within sandbox \"f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:42:17.253133 containerd[1603]: time="2025-09-09T05:42:17.253073099Z" level=info msg="CreateContainer within sandbox \"39a6c456f6215e2243b213a126d2b1fe67fdc7b6ea50c4ff9957922596c55df7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2\"" Sep 9 05:42:17.254274 containerd[1603]: time="2025-09-09T05:42:17.254245652Z" level=info msg="StartContainer for \"73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2\"" Sep 9 05:42:17.258829 containerd[1603]: time="2025-09-09T05:42:17.258785446Z" level=info msg="connecting to shim 73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2" address="unix:///run/containerd/s/7d77f4a2f55fdf8a7fed34f46fab20ae2cd22f76ca7339c3eb29c11df0a6fff1" protocol=ttrpc version=3 Sep 9 05:42:17.270621 containerd[1603]: time="2025-09-09T05:42:17.269812734Z" level=info msg="Container dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:17.285297 containerd[1603]: time="2025-09-09T05:42:17.285238080Z" level=info msg="CreateContainer within sandbox \"f16c8f4e33b561301cb18062406a910507f1884dde68324dc3f2c540ae77bc54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef\"" Sep 9 05:42:17.287054 containerd[1603]: time="2025-09-09T05:42:17.287000229Z" level=info msg="StartContainer for \"dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef\"" Sep 9 05:42:17.295106 containerd[1603]: time="2025-09-09T05:42:17.295055989Z" level=info msg="connecting to shim 
dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef" address="unix:///run/containerd/s/77f26ba36f6002195fd691f0c9aedcd592e7d143e95e1555ddb5818655768cdc" protocol=ttrpc version=3 Sep 9 05:42:17.310814 systemd[1]: Started cri-containerd-73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2.scope - libcontainer container 73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2. Sep 9 05:42:17.322286 kubelet[2456]: W0909 05:42:17.319324 2456 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Sep 9 05:42:17.322286 kubelet[2456]: E0909 05:42:17.322205 2456 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:42:17.347841 systemd[1]: Started cri-containerd-dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef.scope - libcontainer container dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef. 
Sep 9 05:42:17.413381 containerd[1603]: time="2025-09-09T05:42:17.412899181Z" level=info msg="StartContainer for \"5132de66b05f8d401415fb1305b244b3bcffa983a65349d41f86d448b1eb4b23\" returns successfully" Sep 9 05:42:17.470248 containerd[1603]: time="2025-09-09T05:42:17.470173623Z" level=info msg="StartContainer for \"73baa7d8a205a714208e582434e11770467538e49d840e1d66c1fb75225bcbc2\" returns successfully" Sep 9 05:42:17.545174 containerd[1603]: time="2025-09-09T05:42:17.545102325Z" level=info msg="StartContainer for \"dad509772a250a2f0ca24b82567ce0a49879c2ec195cbc09df22c829508f4fef\" returns successfully" Sep 9 05:42:18.003020 kubelet[2456]: I0909 05:42:18.002247 2456 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:21.149164 kubelet[2456]: E0909 05:42:21.149079 2456 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" not found" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:21.263818 kubelet[2456]: I0909 05:42:21.263761 2456 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:21.264041 kubelet[2456]: E0909 05:42:21.263845 2456 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\": node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" not found" Sep 9 05:42:21.347907 kubelet[2456]: I0909 05:42:21.347844 2456 apiserver.go:52] "Watching apiserver" Sep 9 05:42:21.371868 kubelet[2456]: I0909 05:42:21.371806 2456 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 05:42:22.634874 update_engine[1536]: I20250909 05:42:22.634769 1536 update_attempter.cc:509] Updating boot flags... 
Sep 9 05:42:23.188086 systemd[1]: Reload requested from client PID 2744 ('systemctl') (unit session-9.scope)... Sep 9 05:42:23.188118 systemd[1]: Reloading... Sep 9 05:42:23.461664 zram_generator::config[2790]: No configuration found. Sep 9 05:42:23.828160 systemd[1]: Reloading finished in 639 ms. Sep 9 05:42:23.923842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:23.948563 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:42:23.949604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:42:23.949702 systemd[1]: kubelet.service: Consumed 1.331s CPU time, 129.3M memory peak. Sep 9 05:42:23.956437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:42:24.342440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:42:24.354157 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:42:24.426247 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:42:24.426247 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 05:42:24.426247 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:42:24.426841 kubelet[2838]: I0909 05:42:24.426328 2838 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:42:24.443737 kubelet[2838]: I0909 05:42:24.443648 2838 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 05:42:24.443737 kubelet[2838]: I0909 05:42:24.443685 2838 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:42:24.444782 kubelet[2838]: I0909 05:42:24.444258 2838 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 05:42:24.446416 kubelet[2838]: I0909 05:42:24.446389 2838 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 05:42:24.453780 kubelet[2838]: I0909 05:42:24.453665 2838 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:42:24.472073 kubelet[2838]: I0909 05:42:24.471061 2838 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:42:24.477859 kubelet[2838]: I0909 05:42:24.477823 2838 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:42:24.478635 kubelet[2838]: I0909 05:42:24.478025 2838 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 05:42:24.478635 kubelet[2838]: I0909 05:42:24.478274 2838 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:42:24.478635 kubelet[2838]: I0909 05:42:24.478314 2838 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyMan
agerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:42:24.478635 kubelet[2838]: I0909 05:42:24.478641 2838 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:42:24.478955 kubelet[2838]: I0909 05:42:24.478662 2838 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 05:42:24.478955 kubelet[2838]: I0909 05:42:24.478706 2838 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:42:24.478955 kubelet[2838]: I0909 05:42:24.478862 2838 kubelet.go:408] "Attempting to sync node with API server" Sep 9 05:42:24.478955 kubelet[2838]: I0909 05:42:24.478882 2838 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:42:24.479799 kubelet[2838]: I0909 05:42:24.479745 2838 kubelet.go:314] "Adding apiserver pod source" Sep 9 05:42:24.479799 kubelet[2838]: I0909 05:42:24.479790 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:42:24.482362 kubelet[2838]: I0909 05:42:24.482324 2838 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:42:24.491625 kubelet[2838]: I0909 05:42:24.491032 2838 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:42:24.493194 sudo[2852]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 05:42:24.494230 sudo[2852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:42:24.499328 kubelet[2838]: I0909 05:42:24.499298 2838 server.go:1274] "Started kubelet" Sep 9 05:42:24.523659 kubelet[2838]: I0909 05:42:24.522937 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:42:24.533254 kubelet[2838]: I0909 05:42:24.532844 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 
05:42:24.533254 kubelet[2838]: I0909 05:42:24.532972 2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:42:24.535467 kubelet[2838]: I0909 05:42:24.534523 2838 server.go:449] "Adding debug handlers to kubelet server" Sep 9 05:42:24.537024 kubelet[2838]: I0909 05:42:24.536124 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:42:24.537024 kubelet[2838]: I0909 05:42:24.536859 2838 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:42:24.543655 kubelet[2838]: I0909 05:42:24.543575 2838 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:42:24.545306 kubelet[2838]: I0909 05:42:24.545175 2838 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:42:24.545641 kubelet[2838]: I0909 05:42:24.545364 2838 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:42:24.554647 kubelet[2838]: I0909 05:42:24.551339 2838 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:42:24.554647 kubelet[2838]: I0909 05:42:24.551443 2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:42:24.557404 kubelet[2838]: E0909 05:42:24.557297 2838 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:42:24.561967 kubelet[2838]: I0909 05:42:24.561942 2838 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:42:24.564300 kubelet[2838]: I0909 05:42:24.564257 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:42:24.570037 kubelet[2838]: I0909 05:42:24.570006 2838 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:42:24.570203 kubelet[2838]: I0909 05:42:24.570045 2838 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:42:24.570203 kubelet[2838]: I0909 05:42:24.570071 2838 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:42:24.570203 kubelet[2838]: E0909 05:42:24.570129 2838 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656060 2838 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656087 2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656113 2838 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656323 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656342 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:42:24.656383 kubelet[2838]: I0909 05:42:24.656371 2838 policy_none.go:49] "None policy: Start" Sep 9 05:42:24.659849 kubelet[2838]: I0909 05:42:24.659804 2838 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 05:42:24.659849 kubelet[2838]: I0909 05:42:24.659836 2838 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:42:24.660642 kubelet[2838]: I0909 05:42:24.660046 2838 state_mem.go:75] "Updated machine memory state" Sep 9 05:42:24.668510 kubelet[2838]: I0909 05:42:24.668480 2838 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:42:24.669956 kubelet[2838]: I0909 05:42:24.669935 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:42:24.670048 kubelet[2838]: I0909 05:42:24.669958 2838 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:42:24.670570 kubelet[2838]: I0909 05:42:24.670538 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:42:24.703551 kubelet[2838]: W0909 05:42:24.703511 2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:42:24.706066 kubelet[2838]: W0909 05:42:24.705529 2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:42:24.708068 kubelet[2838]: W0909 05:42:24.707971 2838 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 9 05:42:24.793196 kubelet[2838]: I0909 05:42:24.793137 2838 kubelet_node_status.go:72] "Attempting to register node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.810852 kubelet[2838]: I0909 05:42:24.810813 2838 kubelet_node_status.go:111] "Node was previously registered" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.811110 kubelet[2838]: I0909 05:42:24.811057 2838 kubelet_node_status.go:75] "Successfully registered node" node="ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.846755 kubelet[2838]: I0909 05:42:24.846710 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-ca-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.846922 kubelet[2838]: I0909 05:42:24.846768 2838 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.846922 kubelet[2838]: I0909 05:42:24.846797 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-k8s-certs\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.846922 kubelet[2838]: I0909 05:42:24.846828 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-kubeconfig\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.846922 kubelet[2838]: I0909 05:42:24.846860 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b99fac4be45dcda3cc06675e1d1fb6c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"b99fac4be45dcda3cc06675e1d1fb6c7\") " pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.847149 kubelet[2838]: I0909 05:42:24.846902 2838 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ca93e84d736ff259f6c3ecb144d0cae-kubeconfig\") pod \"kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"2ca93e84d736ff259f6c3ecb144d0cae\") " pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.847149 kubelet[2838]: I0909 05:42:24.846972 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-ca-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.847149 kubelet[2838]: I0909 05:42:24.847004 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-k8s-certs\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:24.847149 kubelet[2838]: I0909 05:42:24.847037 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ee10439cdcf0e9392ab560d1594673-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" (UID: \"29ee10439cdcf0e9392ab560d1594673\") " pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" Sep 9 05:42:25.173710 sudo[2852]: pam_unix(sudo:session): session closed for user root Sep 9 05:42:25.481772 kubelet[2838]: I0909 05:42:25.481339 2838 
apiserver.go:52] "Watching apiserver" Sep 9 05:42:25.546491 kubelet[2838]: I0909 05:42:25.546395 2838 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 05:42:25.697926 kubelet[2838]: I0909 05:42:25.697375 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" podStartSLOduration=1.697346101 podStartE2EDuration="1.697346101s" podCreationTimestamp="2025-09-09 05:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:25.660703194 +0000 UTC m=+1.298692908" watchObservedRunningTime="2025-09-09 05:42:25.697346101 +0000 UTC m=+1.335335817" Sep 9 05:42:25.714891 kubelet[2838]: I0909 05:42:25.714416 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" podStartSLOduration=1.714387771 podStartE2EDuration="1.714387771s" podCreationTimestamp="2025-09-09 05:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:25.698975605 +0000 UTC m=+1.336965320" watchObservedRunningTime="2025-09-09 05:42:25.714387771 +0000 UTC m=+1.352377484" Sep 9 05:42:27.127936 sudo[1890]: pam_unix(sudo:session): session closed for user root Sep 9 05:42:27.170972 sshd[1889]: Connection closed by 139.178.89.65 port 38190 Sep 9 05:42:27.171845 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:27.177876 systemd[1]: sshd@8-10.128.0.68:22-139.178.89.65:38190.service: Deactivated successfully. Sep 9 05:42:27.181412 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:42:27.181719 systemd[1]: session-9.scope: Consumed 7.012s CPU time, 266.6M memory peak. 
Sep 9 05:42:27.183888 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:42:27.186383 systemd-logind[1533]: Removed session 9. Sep 9 05:42:29.454429 kubelet[2838]: I0909 05:42:29.454377 2838 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:42:29.455369 containerd[1603]: time="2025-09-09T05:42:29.454935758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:42:29.455841 kubelet[2838]: I0909 05:42:29.455438 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:42:30.297022 kubelet[2838]: I0909 05:42:30.296926 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5" podStartSLOduration=6.296888319 podStartE2EDuration="6.296888319s" podCreationTimestamp="2025-09-09 05:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:25.71758329 +0000 UTC m=+1.355573015" watchObservedRunningTime="2025-09-09 05:42:30.296888319 +0000 UTC m=+5.934878032" Sep 9 05:42:30.313321 systemd[1]: Created slice kubepods-besteffort-pod09d1269e_f772_45a6_ac12_ce2608540bbb.slice - libcontainer container kubepods-besteffort-pod09d1269e_f772_45a6_ac12_ce2608540bbb.slice. Sep 9 05:42:30.339101 systemd[1]: Created slice kubepods-burstable-podf77fb038_4f2b_4b28_8204_d4bce19f956c.slice - libcontainer container kubepods-burstable-podf77fb038_4f2b_4b28_8204_d4bce19f956c.slice. 
Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387224 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-config-path\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387297 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cni-path\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387330 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-etc-cni-netd\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387362 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09d1269e-f772-45a6-ac12-ce2608540bbb-kube-proxy\") pod \"kube-proxy-kw7fv\" (UID: \"09d1269e-f772-45a6-ac12-ce2608540bbb\") " pod="kube-system/kube-proxy-kw7fv" Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387390 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09d1269e-f772-45a6-ac12-ce2608540bbb-xtables-lock\") pod \"kube-proxy-kw7fv\" (UID: \"09d1269e-f772-45a6-ac12-ce2608540bbb\") " pod="kube-system/kube-proxy-kw7fv" Sep 9 05:42:30.387625 kubelet[2838]: I0909 05:42:30.387418 2838 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09d1269e-f772-45a6-ac12-ce2608540bbb-lib-modules\") pod \"kube-proxy-kw7fv\" (UID: \"09d1269e-f772-45a6-ac12-ce2608540bbb\") " pod="kube-system/kube-proxy-kw7fv" Sep 9 05:42:30.388116 kubelet[2838]: I0909 05:42:30.387446 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxp2\" (UniqueName: \"kubernetes.io/projected/09d1269e-f772-45a6-ac12-ce2608540bbb-kube-api-access-nkxp2\") pod \"kube-proxy-kw7fv\" (UID: \"09d1269e-f772-45a6-ac12-ce2608540bbb\") " pod="kube-system/kube-proxy-kw7fv" Sep 9 05:42:30.388116 kubelet[2838]: I0909 05:42:30.387491 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-net\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.388116 kubelet[2838]: I0909 05:42:30.387519 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-kernel\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.388116 kubelet[2838]: I0909 05:42:30.387548 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd57j\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-kube-api-access-vd57j\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.388116 kubelet[2838]: I0909 05:42:30.387574 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-cgroup\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.388853 kubelet[2838]: I0909 05:42:30.388793 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-hostproc\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389049 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-lib-modules\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389111 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f77fb038-4f2b-4b28-8204-d4bce19f956c-clustermesh-secrets\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389145 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-xtables-lock\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389173 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-run\") pod \"cilium-jlcz6\" (UID: 
\"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389201 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-bpf-maps\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.389345 kubelet[2838]: I0909 05:42:30.389248 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-hubble-tls\") pod \"cilium-jlcz6\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") " pod="kube-system/cilium-jlcz6" Sep 9 05:42:30.553550 systemd[1]: Created slice kubepods-besteffort-podf60740ac_0181_4dbd_a5af_01ea43447a12.slice - libcontainer container kubepods-besteffort-podf60740ac_0181_4dbd_a5af_01ea43447a12.slice. 
Sep 9 05:42:30.596625 kubelet[2838]: I0909 05:42:30.595808 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f60740ac-0181-4dbd-a5af-01ea43447a12-cilium-config-path\") pod \"cilium-operator-5d85765b45-wf2n7\" (UID: \"f60740ac-0181-4dbd-a5af-01ea43447a12\") " pod="kube-system/cilium-operator-5d85765b45-wf2n7" Sep 9 05:42:30.596625 kubelet[2838]: I0909 05:42:30.595906 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbvn\" (UniqueName: \"kubernetes.io/projected/f60740ac-0181-4dbd-a5af-01ea43447a12-kube-api-access-dgbvn\") pod \"cilium-operator-5d85765b45-wf2n7\" (UID: \"f60740ac-0181-4dbd-a5af-01ea43447a12\") " pod="kube-system/cilium-operator-5d85765b45-wf2n7" Sep 9 05:42:30.627993 containerd[1603]: time="2025-09-09T05:42:30.627920551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kw7fv,Uid:09d1269e-f772-45a6-ac12-ce2608540bbb,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:30.650508 containerd[1603]: time="2025-09-09T05:42:30.650441432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlcz6,Uid:f77fb038-4f2b-4b28-8204-d4bce19f956c,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:30.669291 containerd[1603]: time="2025-09-09T05:42:30.669224278Z" level=info msg="connecting to shim f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8" address="unix:///run/containerd/s/b782206281c2405dda374a29ba3bdea8b9655a5b373b2472d4a6a5889d6a8aeb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:30.696810 containerd[1603]: time="2025-09-09T05:42:30.696485291Z" level=info msg="connecting to shim fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:30.738956 systemd[1]: Started 
cri-containerd-f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8.scope - libcontainer container f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8. Sep 9 05:42:30.770938 systemd[1]: Started cri-containerd-fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024.scope - libcontainer container fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024. Sep 9 05:42:30.830442 containerd[1603]: time="2025-09-09T05:42:30.830178131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kw7fv,Uid:09d1269e-f772-45a6-ac12-ce2608540bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8\"" Sep 9 05:42:30.838382 containerd[1603]: time="2025-09-09T05:42:30.837982113Z" level=info msg="CreateContainer within sandbox \"f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:42:30.855451 containerd[1603]: time="2025-09-09T05:42:30.855381378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlcz6,Uid:f77fb038-4f2b-4b28-8204-d4bce19f956c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\"" Sep 9 05:42:30.859825 containerd[1603]: time="2025-09-09T05:42:30.859771681Z" level=info msg="Container 6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:30.861431 containerd[1603]: time="2025-09-09T05:42:30.860354577Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:42:30.862935 containerd[1603]: time="2025-09-09T05:42:30.862813472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wf2n7,Uid:f60740ac-0181-4dbd-a5af-01ea43447a12,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:30.889289 containerd[1603]: 
time="2025-09-09T05:42:30.889101314Z" level=info msg="CreateContainer within sandbox \"f2762a3e60cb7544fb6a25d90ac2a8801e816c03a4b1cce98e7e9d6e9d786fc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b\"" Sep 9 05:42:30.894012 containerd[1603]: time="2025-09-09T05:42:30.892959703Z" level=info msg="StartContainer for \"6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b\"" Sep 9 05:42:30.898119 containerd[1603]: time="2025-09-09T05:42:30.898036905Z" level=info msg="connecting to shim 6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b" address="unix:///run/containerd/s/b782206281c2405dda374a29ba3bdea8b9655a5b373b2472d4a6a5889d6a8aeb" protocol=ttrpc version=3 Sep 9 05:42:30.926699 containerd[1603]: time="2025-09-09T05:42:30.926628575Z" level=info msg="connecting to shim d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398" address="unix:///run/containerd/s/17e3c8a8ae32edddcddf06c813ccee898e0e53090e2befbe4443dfbe4c0f1af9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:30.960022 systemd[1]: Started cri-containerd-6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b.scope - libcontainer container 6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b. Sep 9 05:42:30.976866 systemd[1]: Started cri-containerd-d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398.scope - libcontainer container d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398. 
Sep 9 05:42:31.063930 containerd[1603]: time="2025-09-09T05:42:31.063674326Z" level=info msg="StartContainer for \"6190e616a3581d55d2ff895b92e705498425a697f08b04a4e4c603b866823b8b\" returns successfully" Sep 9 05:42:31.079877 containerd[1603]: time="2025-09-09T05:42:31.079798051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wf2n7,Uid:f60740ac-0181-4dbd-a5af-01ea43447a12,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\"" Sep 9 05:42:31.674809 kubelet[2838]: I0909 05:42:31.674716 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kw7fv" podStartSLOduration=1.6746820489999998 podStartE2EDuration="1.674682049s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:31.658875044 +0000 UTC m=+7.296864756" watchObservedRunningTime="2025-09-09 05:42:31.674682049 +0000 UTC m=+7.312671763" Sep 9 05:42:35.950991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950852623.mount: Deactivated successfully. 
Sep 9 05:42:38.965100 containerd[1603]: time="2025-09-09T05:42:38.965010255Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:38.966682 containerd[1603]: time="2025-09-09T05:42:38.966418978Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 05:42:38.967819 containerd[1603]: time="2025-09-09T05:42:38.967774935Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:38.970305 containerd[1603]: time="2025-09-09T05:42:38.970255995Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.109854941s" Sep 9 05:42:38.970423 containerd[1603]: time="2025-09-09T05:42:38.970330134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 05:42:38.972851 containerd[1603]: time="2025-09-09T05:42:38.972790899Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:42:38.973986 containerd[1603]: time="2025-09-09T05:42:38.973936355Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:42:38.991891 containerd[1603]: time="2025-09-09T05:42:38.991817757Z" level=info msg="Container ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:39.004648 containerd[1603]: time="2025-09-09T05:42:39.004552868Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\"" Sep 9 05:42:39.005473 containerd[1603]: time="2025-09-09T05:42:39.005436713Z" level=info msg="StartContainer for \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\"" Sep 9 05:42:39.007253 containerd[1603]: time="2025-09-09T05:42:39.007156791Z" level=info msg="connecting to shim ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" protocol=ttrpc version=3 Sep 9 05:42:39.044947 systemd[1]: Started cri-containerd-ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624.scope - libcontainer container ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624. Sep 9 05:42:39.099448 containerd[1603]: time="2025-09-09T05:42:39.099370759Z" level=info msg="StartContainer for \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" returns successfully" Sep 9 05:42:39.118776 systemd[1]: cri-containerd-ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624.scope: Deactivated successfully. 
Sep 9 05:42:39.124070 containerd[1603]: time="2025-09-09T05:42:39.123865929Z" level=info msg="received exit event container_id:\"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" id:\"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" pid:3250 exited_at:{seconds:1757396559 nanos:122792610}" Sep 9 05:42:39.124070 containerd[1603]: time="2025-09-09T05:42:39.123979511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" id:\"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" pid:3250 exited_at:{seconds:1757396559 nanos:122792610}" Sep 9 05:42:39.170172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624-rootfs.mount: Deactivated successfully. Sep 9 05:42:41.682676 containerd[1603]: time="2025-09-09T05:42:41.682458172Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:42:41.698475 containerd[1603]: time="2025-09-09T05:42:41.698433295Z" level=info msg="Container a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:41.710686 containerd[1603]: time="2025-09-09T05:42:41.710615655Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\"" Sep 9 05:42:41.711809 containerd[1603]: time="2025-09-09T05:42:41.711747766Z" level=info msg="StartContainer for \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\"" Sep 9 05:42:41.713543 containerd[1603]: time="2025-09-09T05:42:41.713495973Z" level=info msg="connecting to shim 
a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" protocol=ttrpc version=3 Sep 9 05:42:41.755916 systemd[1]: Started cri-containerd-a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7.scope - libcontainer container a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7. Sep 9 05:42:41.826998 containerd[1603]: time="2025-09-09T05:42:41.826856423Z" level=info msg="StartContainer for \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" returns successfully" Sep 9 05:42:41.862134 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:42:41.863046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:42:41.866376 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:42:41.872099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:42:41.877858 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:42:41.878671 systemd[1]: cri-containerd-a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7.scope: Deactivated successfully. Sep 9 05:42:41.897037 containerd[1603]: time="2025-09-09T05:42:41.896827854Z" level=info msg="received exit event container_id:\"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" id:\"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" pid:3294 exited_at:{seconds:1757396561 nanos:893370022}" Sep 9 05:42:41.904465 containerd[1603]: time="2025-09-09T05:42:41.903787576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" id:\"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" pid:3294 exited_at:{seconds:1757396561 nanos:893370022}" Sep 9 05:42:41.934818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 9 05:42:42.688081 containerd[1603]: time="2025-09-09T05:42:42.688017689Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:42:42.701977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215804581.mount: Deactivated successfully. Sep 9 05:42:42.702165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7-rootfs.mount: Deactivated successfully. Sep 9 05:42:42.710336 containerd[1603]: time="2025-09-09T05:42:42.707776847Z" level=info msg="Container 9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:42.720258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329595957.mount: Deactivated successfully. Sep 9 05:42:42.738189 containerd[1603]: time="2025-09-09T05:42:42.738113954Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\"" Sep 9 05:42:42.739625 containerd[1603]: time="2025-09-09T05:42:42.739569677Z" level=info msg="StartContainer for \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\"" Sep 9 05:42:42.747827 containerd[1603]: time="2025-09-09T05:42:42.747761941Z" level=info msg="connecting to shim 9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" protocol=ttrpc version=3 Sep 9 05:42:42.779355 containerd[1603]: time="2025-09-09T05:42:42.779051561Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:42.793404 systemd[1]: Started cri-containerd-9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade.scope - libcontainer container 9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade. Sep 9 05:42:42.797974 containerd[1603]: time="2025-09-09T05:42:42.797801874Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 05:42:42.799942 containerd[1603]: time="2025-09-09T05:42:42.799875954Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:42.809741 containerd[1603]: time="2025-09-09T05:42:42.809643639Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.836782491s" Sep 9 05:42:42.809741 containerd[1603]: time="2025-09-09T05:42:42.809719194Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 05:42:42.816962 containerd[1603]: time="2025-09-09T05:42:42.816873769Z" level=info msg="CreateContainer within sandbox \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:42:42.840400 containerd[1603]: time="2025-09-09T05:42:42.840322428Z" level=info msg="Container 
b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:42.858533 containerd[1603]: time="2025-09-09T05:42:42.858454893Z" level=info msg="CreateContainer within sandbox \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\"" Sep 9 05:42:42.861637 containerd[1603]: time="2025-09-09T05:42:42.861527956Z" level=info msg="StartContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\"" Sep 9 05:42:42.864258 containerd[1603]: time="2025-09-09T05:42:42.864124001Z" level=info msg="connecting to shim b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e" address="unix:///run/containerd/s/17e3c8a8ae32edddcddf06c813ccee898e0e53090e2befbe4443dfbe4c0f1af9" protocol=ttrpc version=3 Sep 9 05:42:42.915372 systemd[1]: Started cri-containerd-b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e.scope - libcontainer container b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e. Sep 9 05:42:42.916139 systemd[1]: cri-containerd-9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade.scope: Deactivated successfully. 
Sep 9 05:42:42.925681 containerd[1603]: time="2025-09-09T05:42:42.925586692Z" level=info msg="received exit event container_id:\"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" id:\"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" pid:3358 exited_at:{seconds:1757396562 nanos:924737218}" Sep 9 05:42:42.929721 containerd[1603]: time="2025-09-09T05:42:42.929572713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" id:\"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" pid:3358 exited_at:{seconds:1757396562 nanos:924737218}" Sep 9 05:42:42.930584 containerd[1603]: time="2025-09-09T05:42:42.930481625Z" level=info msg="StartContainer for \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" returns successfully" Sep 9 05:42:43.115145 containerd[1603]: time="2025-09-09T05:42:43.115049204Z" level=info msg="StartContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" returns successfully" Sep 9 05:42:43.706501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008927896.mount: Deactivated successfully. Sep 9 05:42:43.708002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade-rootfs.mount: Deactivated successfully. Sep 9 05:42:43.722650 containerd[1603]: time="2025-09-09T05:42:43.720155139Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:42:43.763902 containerd[1603]: time="2025-09-09T05:42:43.763835302Z" level=info msg="Container 384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:43.768803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079472617.mount: Deactivated successfully. 
Sep 9 05:42:43.785145 containerd[1603]: time="2025-09-09T05:42:43.785076217Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\"" Sep 9 05:42:43.788651 containerd[1603]: time="2025-09-09T05:42:43.786296583Z" level=info msg="StartContainer for \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\"" Sep 9 05:42:43.790917 containerd[1603]: time="2025-09-09T05:42:43.790740776Z" level=info msg="connecting to shim 384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" protocol=ttrpc version=3 Sep 9 05:42:43.860258 systemd[1]: Started cri-containerd-384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27.scope - libcontainer container 384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27. Sep 9 05:42:43.995403 kubelet[2838]: I0909 05:42:43.994540 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wf2n7" podStartSLOduration=2.265837077 podStartE2EDuration="13.994505572s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="2025-09-09 05:42:31.084085249 +0000 UTC m=+6.722074945" lastFinishedPulling="2025-09-09 05:42:42.812753739 +0000 UTC m=+18.450743440" observedRunningTime="2025-09-09 05:42:43.882827209 +0000 UTC m=+19.520816923" watchObservedRunningTime="2025-09-09 05:42:43.994505572 +0000 UTC m=+19.632495286" Sep 9 05:42:44.032805 systemd[1]: cri-containerd-384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27.scope: Deactivated successfully. 
Sep 9 05:42:44.040014 containerd[1603]: time="2025-09-09T05:42:44.039683565Z" level=info msg="StartContainer for \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" returns successfully" Sep 9 05:42:44.041487 containerd[1603]: time="2025-09-09T05:42:44.040550900Z" level=info msg="received exit event container_id:\"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" id:\"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" pid:3429 exited_at:{seconds:1757396564 nanos:39711567}" Sep 9 05:42:44.046518 containerd[1603]: time="2025-09-09T05:42:44.046470529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" id:\"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" pid:3429 exited_at:{seconds:1757396564 nanos:39711567}" Sep 9 05:42:44.109845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27-rootfs.mount: Deactivated successfully. 
Sep 9 05:42:44.728900 containerd[1603]: time="2025-09-09T05:42:44.728829252Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:42:44.749762 containerd[1603]: time="2025-09-09T05:42:44.746770515Z" level=info msg="Container 7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:44.771066 containerd[1603]: time="2025-09-09T05:42:44.770978418Z" level=info msg="CreateContainer within sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\"" Sep 9 05:42:44.774276 containerd[1603]: time="2025-09-09T05:42:44.774210390Z" level=info msg="StartContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\"" Sep 9 05:42:44.778077 containerd[1603]: time="2025-09-09T05:42:44.778021690Z" level=info msg="connecting to shim 7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2" address="unix:///run/containerd/s/3f05c9e0dcb92fe6880e68968215ce27fefc09779fb3936d4f80a6f08ae57ab4" protocol=ttrpc version=3 Sep 9 05:42:44.836900 systemd[1]: Started cri-containerd-7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2.scope - libcontainer container 7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2. 
Sep 9 05:42:44.920173 containerd[1603]: time="2025-09-09T05:42:44.920112854Z" level=info msg="StartContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" returns successfully" Sep 9 05:42:45.189005 containerd[1603]: time="2025-09-09T05:42:45.188842982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" id:\"e9094c0074a0e6860127b0847d2517fcef0faa2919a7c14b5d65931c09c9f4c3\" pid:3496 exited_at:{seconds:1757396565 nanos:188238270}" Sep 9 05:42:45.257177 kubelet[2838]: I0909 05:42:45.256668 2838 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 05:42:45.321388 systemd[1]: Created slice kubepods-burstable-pod5e4db9c8_6662_4d1a_8f68_7229a8dfae93.slice - libcontainer container kubepods-burstable-pod5e4db9c8_6662_4d1a_8f68_7229a8dfae93.slice. Sep 9 05:42:45.358368 systemd[1]: Created slice kubepods-burstable-poda3d838ac_e80f_40a6_87d8_515aa0fa2dd4.slice - libcontainer container kubepods-burstable-poda3d838ac_e80f_40a6_87d8_515aa0fa2dd4.slice. 
Sep 9 05:42:45.408990 kubelet[2838]: I0909 05:42:45.408698 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlrw\" (UniqueName: \"kubernetes.io/projected/5e4db9c8-6662-4d1a-8f68-7229a8dfae93-kube-api-access-5zlrw\") pod \"coredns-7c65d6cfc9-jxqf6\" (UID: \"5e4db9c8-6662-4d1a-8f68-7229a8dfae93\") " pod="kube-system/coredns-7c65d6cfc9-jxqf6" Sep 9 05:42:45.408990 kubelet[2838]: I0909 05:42:45.408785 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d838ac-e80f-40a6-87d8-515aa0fa2dd4-config-volume\") pod \"coredns-7c65d6cfc9-bdpcw\" (UID: \"a3d838ac-e80f-40a6-87d8-515aa0fa2dd4\") " pod="kube-system/coredns-7c65d6cfc9-bdpcw" Sep 9 05:42:45.408990 kubelet[2838]: I0909 05:42:45.408829 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24fk\" (UniqueName: \"kubernetes.io/projected/a3d838ac-e80f-40a6-87d8-515aa0fa2dd4-kube-api-access-d24fk\") pod \"coredns-7c65d6cfc9-bdpcw\" (UID: \"a3d838ac-e80f-40a6-87d8-515aa0fa2dd4\") " pod="kube-system/coredns-7c65d6cfc9-bdpcw" Sep 9 05:42:45.408990 kubelet[2838]: I0909 05:42:45.408888 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e4db9c8-6662-4d1a-8f68-7229a8dfae93-config-volume\") pod \"coredns-7c65d6cfc9-jxqf6\" (UID: \"5e4db9c8-6662-4d1a-8f68-7229a8dfae93\") " pod="kube-system/coredns-7c65d6cfc9-jxqf6" Sep 9 05:42:45.638925 containerd[1603]: time="2025-09-09T05:42:45.638845507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxqf6,Uid:5e4db9c8-6662-4d1a-8f68-7229a8dfae93,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:45.669780 containerd[1603]: time="2025-09-09T05:42:45.669614538Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-bdpcw,Uid:a3d838ac-e80f-40a6-87d8-515aa0fa2dd4,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:47.624502 systemd-networkd[1457]: cilium_host: Link UP Sep 9 05:42:47.624793 systemd-networkd[1457]: cilium_net: Link UP Sep 9 05:42:47.628253 systemd-networkd[1457]: cilium_net: Gained carrier Sep 9 05:42:47.628560 systemd-networkd[1457]: cilium_host: Gained carrier Sep 9 05:42:47.769356 systemd-networkd[1457]: cilium_vxlan: Link UP Sep 9 05:42:47.769372 systemd-networkd[1457]: cilium_vxlan: Gained carrier Sep 9 05:42:48.063692 kernel: NET: Registered PF_ALG protocol family Sep 9 05:42:48.356894 systemd-networkd[1457]: cilium_host: Gained IPv6LL Sep 9 05:42:48.483916 systemd-networkd[1457]: cilium_net: Gained IPv6LL Sep 9 05:42:48.984658 systemd-networkd[1457]: lxc_health: Link UP Sep 9 05:42:48.988716 systemd-networkd[1457]: lxc_health: Gained carrier Sep 9 05:42:49.248432 systemd-networkd[1457]: lxcb101f443e580: Link UP Sep 9 05:42:49.259255 kernel: eth0: renamed from tmpea225 Sep 9 05:42:49.253669 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL Sep 9 05:42:49.265159 systemd-networkd[1457]: lxcb101f443e580: Gained carrier Sep 9 05:42:49.696694 systemd-networkd[1457]: lxc9cc9a399e6bf: Link UP Sep 9 05:42:49.709639 kernel: eth0: renamed from tmpc94b2 Sep 9 05:42:49.716930 systemd-networkd[1457]: lxc9cc9a399e6bf: Gained carrier Sep 9 05:42:50.211996 systemd-networkd[1457]: lxc_health: Gained IPv6LL Sep 9 05:42:50.688310 kubelet[2838]: I0909 05:42:50.688122 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jlcz6" podStartSLOduration=12.573903272999999 podStartE2EDuration="20.688087195s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="2025-09-09 05:42:30.857460753 +0000 UTC m=+6.495450444" lastFinishedPulling="2025-09-09 05:42:38.97164466 +0000 UTC m=+14.609634366" observedRunningTime="2025-09-09 05:42:45.772460408 +0000 UTC m=+21.410450121" 
watchObservedRunningTime="2025-09-09 05:42:50.688087195 +0000 UTC m=+26.326076910" Sep 9 05:42:50.787946 systemd-networkd[1457]: lxcb101f443e580: Gained IPv6LL Sep 9 05:42:51.172239 systemd-networkd[1457]: lxc9cc9a399e6bf: Gained IPv6LL Sep 9 05:42:53.365953 ntpd[1517]: Listen normally on 8 cilium_host 192.168.0.186:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 8 cilium_host 192.168.0.186:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 9 cilium_net [fe80::4c57:42ff:fe52:9263%4]:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 10 cilium_host [fe80::a850:92ff:fe63:fb5b%5]:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 11 cilium_vxlan [fe80::e070:48ff:fe02:b7f0%6]:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 12 lxc_health [fe80::b808:e2ff:fee8:7df2%8]:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 13 lxcb101f443e580 [fe80::70f5:abff:fe07:1448%10]:123 Sep 9 05:42:53.367284 ntpd[1517]: 9 Sep 05:42:53 ntpd[1517]: Listen normally on 14 lxc9cc9a399e6bf [fe80::9412:cff:fe28:9952%12]:123 Sep 9 05:42:53.366127 ntpd[1517]: Listen normally on 9 cilium_net [fe80::4c57:42ff:fe52:9263%4]:123 Sep 9 05:42:53.366222 ntpd[1517]: Listen normally on 10 cilium_host [fe80::a850:92ff:fe63:fb5b%5]:123 Sep 9 05:42:53.366281 ntpd[1517]: Listen normally on 11 cilium_vxlan [fe80::e070:48ff:fe02:b7f0%6]:123 Sep 9 05:42:53.366358 ntpd[1517]: Listen normally on 12 lxc_health [fe80::b808:e2ff:fee8:7df2%8]:123 Sep 9 05:42:53.366414 ntpd[1517]: Listen normally on 13 lxcb101f443e580 [fe80::70f5:abff:fe07:1448%10]:123 Sep 9 05:42:53.366473 ntpd[1517]: Listen normally on 14 lxc9cc9a399e6bf [fe80::9412:cff:fe28:9952%12]:123 Sep 9 05:42:54.733228 containerd[1603]: time="2025-09-09T05:42:54.733151515Z" level=info msg="connecting to shim 
ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77" address="unix:///run/containerd/s/3db61695e653cd1ca0b9afe0b3de32fec2adb710b98c7c470baba073963f4ce6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:54.741004 containerd[1603]: time="2025-09-09T05:42:54.740877035Z" level=info msg="connecting to shim c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf" address="unix:///run/containerd/s/ff7a32e6ca8184227d71392b90b18fe4baa5685c33c6430e598ad65056846156" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:54.842183 systemd[1]: Started cri-containerd-ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77.scope - libcontainer container ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77. Sep 9 05:42:54.856137 systemd[1]: Started cri-containerd-c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf.scope - libcontainer container c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf. Sep 9 05:42:54.981286 containerd[1603]: time="2025-09-09T05:42:54.981218160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bdpcw,Uid:a3d838ac-e80f-40a6-87d8-515aa0fa2dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77\"" Sep 9 05:42:54.989195 containerd[1603]: time="2025-09-09T05:42:54.988989473Z" level=info msg="CreateContainer within sandbox \"ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:55.008029 containerd[1603]: time="2025-09-09T05:42:55.007844779Z" level=info msg="Container 9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:55.027828 containerd[1603]: time="2025-09-09T05:42:55.027755679Z" level=info msg="CreateContainer within sandbox \"ea225b6e2b3cb25791a071583e96bf0397e71f51bfcc4a14bc05fb48bb4a9f77\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34\"" Sep 9 05:42:55.029196 containerd[1603]: time="2025-09-09T05:42:55.029142243Z" level=info msg="StartContainer for \"9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34\"" Sep 9 05:42:55.030548 containerd[1603]: time="2025-09-09T05:42:55.030502430Z" level=info msg="connecting to shim 9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34" address="unix:///run/containerd/s/3db61695e653cd1ca0b9afe0b3de32fec2adb710b98c7c470baba073963f4ce6" protocol=ttrpc version=3 Sep 9 05:42:55.050859 containerd[1603]: time="2025-09-09T05:42:55.050674307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jxqf6,Uid:5e4db9c8-6662-4d1a-8f68-7229a8dfae93,Namespace:kube-system,Attempt:0,} returns sandbox id \"c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf\"" Sep 9 05:42:55.056337 containerd[1603]: time="2025-09-09T05:42:55.056287454Z" level=info msg="CreateContainer within sandbox \"c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:55.072667 containerd[1603]: time="2025-09-09T05:42:55.072535422Z" level=info msg="Container 716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:55.074883 systemd[1]: Started cri-containerd-9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34.scope - libcontainer container 9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34. 
Sep 9 05:42:55.085221 containerd[1603]: time="2025-09-09T05:42:55.085057066Z" level=info msg="CreateContainer within sandbox \"c94b2dc835adb702bd96cb1135407d5793482ed009c5cad73260a9bcbb24ddbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f\"" Sep 9 05:42:55.087628 containerd[1603]: time="2025-09-09T05:42:55.087545358Z" level=info msg="StartContainer for \"716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f\"" Sep 9 05:42:55.091128 containerd[1603]: time="2025-09-09T05:42:55.091073134Z" level=info msg="connecting to shim 716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f" address="unix:///run/containerd/s/ff7a32e6ca8184227d71392b90b18fe4baa5685c33c6430e598ad65056846156" protocol=ttrpc version=3 Sep 9 05:42:55.130234 systemd[1]: Started cri-containerd-716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f.scope - libcontainer container 716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f. Sep 9 05:42:55.164287 containerd[1603]: time="2025-09-09T05:42:55.164225232Z" level=info msg="StartContainer for \"9e3655ac539e1177c38d9fe9dfb885e2ec23c61641d45a8833473a8edd2fce34\" returns successfully" Sep 9 05:42:55.217543 containerd[1603]: time="2025-09-09T05:42:55.216283353Z" level=info msg="StartContainer for \"716c4c803afbb0b7ae269153f35a3ff936ac7f18da24575178902113ecd9670f\" returns successfully" Sep 9 05:42:55.671298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180012564.mount: Deactivated successfully. 
Sep 9 05:42:55.813677 kubelet[2838]: I0909 05:42:55.813572 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jxqf6" podStartSLOduration=25.813542009 podStartE2EDuration="25.813542009s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:55.812993836 +0000 UTC m=+31.450983567" watchObservedRunningTime="2025-09-09 05:42:55.813542009 +0000 UTC m=+31.451531722" Sep 9 05:42:55.887757 kubelet[2838]: I0909 05:42:55.886587 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bdpcw" podStartSLOduration=25.886550252 podStartE2EDuration="25.886550252s" podCreationTimestamp="2025-09-09 05:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:55.885832567 +0000 UTC m=+31.523822281" watchObservedRunningTime="2025-09-09 05:42:55.886550252 +0000 UTC m=+31.524539966" Sep 9 05:43:44.976457 systemd[1]: Started sshd@9-10.128.0.68:22-139.178.89.65:57338.service - OpenSSH per-connection server daemon (139.178.89.65:57338). Sep 9 05:43:45.293750 sshd[4138]: Accepted publickey for core from 139.178.89.65 port 57338 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:43:45.296003 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:43:45.304677 systemd-logind[1533]: New session 10 of user core. Sep 9 05:43:45.310871 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:43:45.626714 sshd[4141]: Connection closed by 139.178.89.65 port 57338 Sep 9 05:43:45.628904 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Sep 9 05:43:45.635556 systemd[1]: sshd@9-10.128.0.68:22-139.178.89.65:57338.service: Deactivated successfully. 
Sep 9 05:43:45.640791 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:43:45.642696 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:43:45.645227 systemd-logind[1533]: Removed session 10. Sep 9 05:43:50.690030 systemd[1]: Started sshd@10-10.128.0.68:22-139.178.89.65:41060.service - OpenSSH per-connection server daemon (139.178.89.65:41060). Sep 9 05:43:51.002527 sshd[4154]: Accepted publickey for core from 139.178.89.65 port 41060 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:43:51.004238 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:43:51.011398 systemd-logind[1533]: New session 11 of user core. Sep 9 05:43:51.022853 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:43:51.292885 sshd[4157]: Connection closed by 139.178.89.65 port 41060 Sep 9 05:43:51.294135 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Sep 9 05:43:51.300347 systemd[1]: sshd@10-10.128.0.68:22-139.178.89.65:41060.service: Deactivated successfully. Sep 9 05:43:51.304229 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:43:51.305887 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit. Sep 9 05:43:51.308458 systemd-logind[1533]: Removed session 11. Sep 9 05:43:56.351585 systemd[1]: Started sshd@11-10.128.0.68:22-139.178.89.65:41076.service - OpenSSH per-connection server daemon (139.178.89.65:41076). Sep 9 05:43:56.657348 sshd[4170]: Accepted publickey for core from 139.178.89.65 port 41076 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:43:56.659175 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:43:56.665677 systemd-logind[1533]: New session 12 of user core. Sep 9 05:43:56.672811 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 9 05:43:56.945064 sshd[4173]: Connection closed by 139.178.89.65 port 41076 Sep 9 05:43:56.946462 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Sep 9 05:43:56.952820 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:43:56.953623 systemd[1]: sshd@11-10.128.0.68:22-139.178.89.65:41076.service: Deactivated successfully. Sep 9 05:43:56.958018 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:43:56.960554 systemd-logind[1533]: Removed session 12. Sep 9 05:44:02.001427 systemd[1]: Started sshd@12-10.128.0.68:22-139.178.89.65:37696.service - OpenSSH per-connection server daemon (139.178.89.65:37696). Sep 9 05:44:02.310348 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 37696 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:02.312423 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:02.320699 systemd-logind[1533]: New session 13 of user core. Sep 9 05:44:02.325841 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:44:02.615895 sshd[4193]: Connection closed by 139.178.89.65 port 37696 Sep 9 05:44:02.617537 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:02.623282 systemd[1]: sshd@12-10.128.0.68:22-139.178.89.65:37696.service: Deactivated successfully. Sep 9 05:44:02.626880 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:44:02.628945 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:44:02.631415 systemd-logind[1533]: Removed session 13. Sep 9 05:44:07.675150 systemd[1]: Started sshd@13-10.128.0.68:22-139.178.89.65:37710.service - OpenSSH per-connection server daemon (139.178.89.65:37710). 
Sep 9 05:44:07.985856 sshd[4206]: Accepted publickey for core from 139.178.89.65 port 37710 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:07.987281 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:07.993816 systemd-logind[1533]: New session 14 of user core. Sep 9 05:44:08.000790 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:44:08.281192 sshd[4209]: Connection closed by 139.178.89.65 port 37710 Sep 9 05:44:08.282162 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:08.287387 systemd[1]: sshd@13-10.128.0.68:22-139.178.89.65:37710.service: Deactivated successfully. Sep 9 05:44:08.291155 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:44:08.293692 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit. Sep 9 05:44:08.295903 systemd-logind[1533]: Removed session 14. Sep 9 05:44:08.334362 systemd[1]: Started sshd@14-10.128.0.68:22-139.178.89.65:37724.service - OpenSSH per-connection server daemon (139.178.89.65:37724). Sep 9 05:44:08.645045 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 37724 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:08.646859 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:08.653679 systemd-logind[1533]: New session 15 of user core. Sep 9 05:44:08.661809 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:44:08.978733 sshd[4225]: Connection closed by 139.178.89.65 port 37724 Sep 9 05:44:08.979318 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:08.986680 systemd[1]: sshd@14-10.128.0.68:22-139.178.89.65:37724.service: Deactivated successfully. Sep 9 05:44:08.990365 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:44:08.991759 systemd-logind[1533]: Session 15 logged out. 
Waiting for processes to exit. Sep 9 05:44:08.994187 systemd-logind[1533]: Removed session 15. Sep 9 05:44:09.038395 systemd[1]: Started sshd@15-10.128.0.68:22-139.178.89.65:37734.service - OpenSSH per-connection server daemon (139.178.89.65:37734). Sep 9 05:44:09.357071 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 37734 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:09.358912 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:09.365214 systemd-logind[1533]: New session 16 of user core. Sep 9 05:44:09.375822 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:44:09.647822 sshd[4238]: Connection closed by 139.178.89.65 port 37734 Sep 9 05:44:09.648634 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:09.654933 systemd[1]: sshd@15-10.128.0.68:22-139.178.89.65:37734.service: Deactivated successfully. Sep 9 05:44:09.658189 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:44:09.659683 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:44:09.662049 systemd-logind[1533]: Removed session 16. Sep 9 05:44:14.718541 systemd[1]: Started sshd@16-10.128.0.68:22-139.178.89.65:56822.service - OpenSSH per-connection server daemon (139.178.89.65:56822). Sep 9 05:44:15.025284 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 56822 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:15.027080 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:15.033685 systemd-logind[1533]: New session 17 of user core. Sep 9 05:44:15.038786 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 9 05:44:15.342565 sshd[4253]: Connection closed by 139.178.89.65 port 56822 Sep 9 05:44:15.343785 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:15.349795 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:44:15.351205 systemd[1]: sshd@16-10.128.0.68:22-139.178.89.65:56822.service: Deactivated successfully. Sep 9 05:44:15.354273 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:44:15.356803 systemd-logind[1533]: Removed session 17. Sep 9 05:44:20.397655 systemd[1]: Started sshd@17-10.128.0.68:22-139.178.89.65:40024.service - OpenSSH per-connection server daemon (139.178.89.65:40024). Sep 9 05:44:20.707404 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 40024 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:20.709338 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:20.717323 systemd-logind[1533]: New session 18 of user core. Sep 9 05:44:20.724833 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:44:21.001325 sshd[4270]: Connection closed by 139.178.89.65 port 40024 Sep 9 05:44:21.002275 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:21.008431 systemd[1]: sshd@17-10.128.0.68:22-139.178.89.65:40024.service: Deactivated successfully. Sep 9 05:44:21.011526 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:44:21.013436 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:44:21.015745 systemd-logind[1533]: Removed session 18. Sep 9 05:44:26.056829 systemd[1]: Started sshd@18-10.128.0.68:22-139.178.89.65:40032.service - OpenSSH per-connection server daemon (139.178.89.65:40032). 
Sep 9 05:44:26.363362 sshd[4285]: Accepted publickey for core from 139.178.89.65 port 40032 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:26.365135 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:26.372565 systemd-logind[1533]: New session 19 of user core.
Sep 9 05:44:26.377809 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:44:26.656648 sshd[4288]: Connection closed by 139.178.89.65 port 40032
Sep 9 05:44:26.657715 sshd-session[4285]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:26.663821 systemd[1]: sshd@18-10.128.0.68:22-139.178.89.65:40032.service: Deactivated successfully.
Sep 9 05:44:26.667028 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:44:26.669248 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:44:26.671317 systemd-logind[1533]: Removed session 19.
Sep 9 05:44:26.712217 systemd[1]: Started sshd@19-10.128.0.68:22-139.178.89.65:40034.service - OpenSSH per-connection server daemon (139.178.89.65:40034).
Sep 9 05:44:27.020186 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 40034 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:27.022160 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:27.028915 systemd-logind[1533]: New session 20 of user core.
Sep 9 05:44:27.033829 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:44:27.383970 sshd[4303]: Connection closed by 139.178.89.65 port 40034
Sep 9 05:44:27.385306 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:27.391237 systemd[1]: sshd@19-10.128.0.68:22-139.178.89.65:40034.service: Deactivated successfully.
Sep 9 05:44:27.393808 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:44:27.395551 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:44:27.397955 systemd-logind[1533]: Removed session 20.
Sep 9 05:44:27.438662 systemd[1]: Started sshd@20-10.128.0.68:22-139.178.89.65:40046.service - OpenSSH per-connection server daemon (139.178.89.65:40046).
Sep 9 05:44:27.743622 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 40046 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:27.744430 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:27.750645 systemd-logind[1533]: New session 21 of user core.
Sep 9 05:44:27.758856 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:44:29.688654 sshd[4315]: Connection closed by 139.178.89.65 port 40046
Sep 9 05:44:29.690107 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:29.704171 systemd[1]: sshd@20-10.128.0.68:22-139.178.89.65:40046.service: Deactivated successfully.
Sep 9 05:44:29.705686 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:44:29.712426 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:44:29.721373 systemd-logind[1533]: Removed session 21.
Sep 9 05:44:29.751813 systemd[1]: Started sshd@21-10.128.0.68:22-139.178.89.65:40058.service - OpenSSH per-connection server daemon (139.178.89.65:40058).
Sep 9 05:44:30.065668 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 40058 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:30.066995 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:30.075659 systemd-logind[1533]: New session 22 of user core.
Sep 9 05:44:30.082892 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:44:30.490527 sshd[4336]: Connection closed by 139.178.89.65 port 40058
Sep 9 05:44:30.491789 sshd-session[4333]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:30.498225 systemd[1]: sshd@21-10.128.0.68:22-139.178.89.65:40058.service: Deactivated successfully.
Sep 9 05:44:30.501743 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:44:30.503154 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:44:30.505431 systemd-logind[1533]: Removed session 22.
Sep 9 05:44:30.549253 systemd[1]: Started sshd@22-10.128.0.68:22-139.178.89.65:57602.service - OpenSSH per-connection server daemon (139.178.89.65:57602).
Sep 9 05:44:30.858638 sshd[4346]: Accepted publickey for core from 139.178.89.65 port 57602 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:30.859024 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:30.866694 systemd-logind[1533]: New session 23 of user core.
Sep 9 05:44:30.873924 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:44:31.150081 sshd[4349]: Connection closed by 139.178.89.65 port 57602
Sep 9 05:44:31.152118 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:31.157274 systemd[1]: sshd@22-10.128.0.68:22-139.178.89.65:57602.service: Deactivated successfully.
Sep 9 05:44:31.161160 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:44:31.164742 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:44:31.166384 systemd-logind[1533]: Removed session 23.
Sep 9 05:44:36.215543 systemd[1]: Started sshd@23-10.128.0.68:22-139.178.89.65:57618.service - OpenSSH per-connection server daemon (139.178.89.65:57618).
Sep 9 05:44:36.518643 sshd[4363]: Accepted publickey for core from 139.178.89.65 port 57618 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:36.520726 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:36.529141 systemd-logind[1533]: New session 24 of user core.
Sep 9 05:44:36.534628 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:44:36.821503 sshd[4366]: Connection closed by 139.178.89.65 port 57618
Sep 9 05:44:36.822539 sshd-session[4363]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:36.829584 systemd[1]: sshd@23-10.128.0.68:22-139.178.89.65:57618.service: Deactivated successfully.
Sep 9 05:44:36.833163 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:44:36.835116 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:44:36.837376 systemd-logind[1533]: Removed session 24.
Sep 9 05:44:41.880746 systemd[1]: Started sshd@24-10.128.0.68:22-139.178.89.65:46240.service - OpenSSH per-connection server daemon (139.178.89.65:46240).
Sep 9 05:44:42.199739 sshd[4381]: Accepted publickey for core from 139.178.89.65 port 46240 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:42.201085 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:42.207678 systemd-logind[1533]: New session 25 of user core.
Sep 9 05:44:42.214821 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 05:44:42.495112 sshd[4386]: Connection closed by 139.178.89.65 port 46240
Sep 9 05:44:42.495660 sshd-session[4381]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:42.502016 systemd[1]: sshd@24-10.128.0.68:22-139.178.89.65:46240.service: Deactivated successfully.
Sep 9 05:44:42.505287 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 05:44:42.506706 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit.
Sep 9 05:44:42.509055 systemd-logind[1533]: Removed session 25.
Sep 9 05:44:47.551169 systemd[1]: Started sshd@25-10.128.0.68:22-139.178.89.65:46244.service - OpenSSH per-connection server daemon (139.178.89.65:46244).
Sep 9 05:44:47.855579 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 46244 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:47.857325 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:47.865662 systemd-logind[1533]: New session 26 of user core.
Sep 9 05:44:47.873845 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 05:44:48.145073 sshd[4402]: Connection closed by 139.178.89.65 port 46244
Sep 9 05:44:48.146887 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:48.152097 systemd[1]: sshd@25-10.128.0.68:22-139.178.89.65:46244.service: Deactivated successfully.
Sep 9 05:44:48.156481 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 05:44:48.159148 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit.
Sep 9 05:44:48.161286 systemd-logind[1533]: Removed session 26.
Sep 9 05:44:53.204574 systemd[1]: Started sshd@26-10.128.0.68:22-139.178.89.65:52830.service - OpenSSH per-connection server daemon (139.178.89.65:52830).
Sep 9 05:44:53.511616 sshd[4414]: Accepted publickey for core from 139.178.89.65 port 52830 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:53.514318 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:53.522817 systemd-logind[1533]: New session 27 of user core.
Sep 9 05:44:53.533812 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 05:44:53.801900 sshd[4417]: Connection closed by 139.178.89.65 port 52830
Sep 9 05:44:53.802953 sshd-session[4414]: pam_unix(sshd:session): session closed for user core
Sep 9 05:44:53.809518 systemd[1]: sshd@26-10.128.0.68:22-139.178.89.65:52830.service: Deactivated successfully.
Sep 9 05:44:53.812780 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 05:44:53.814692 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit.
Sep 9 05:44:53.817018 systemd-logind[1533]: Removed session 27.
Sep 9 05:44:53.855931 systemd[1]: Started sshd@27-10.128.0.68:22-139.178.89.65:52838.service - OpenSSH per-connection server daemon (139.178.89.65:52838).
Sep 9 05:44:54.159388 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 52838 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:44:54.161301 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:44:54.168547 systemd-logind[1533]: New session 28 of user core.
Sep 9 05:44:54.177786 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 05:44:56.236537 containerd[1603]: time="2025-09-09T05:44:56.234938718Z" level=info msg="StopContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" with timeout 30 (s)"
Sep 9 05:44:56.237917 containerd[1603]: time="2025-09-09T05:44:56.237768923Z" level=info msg="Stop container \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" with signal terminated"
Sep 9 05:44:56.265852 containerd[1603]: time="2025-09-09T05:44:56.265794502Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 05:44:56.267534 systemd[1]: cri-containerd-b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e.scope: Deactivated successfully.
Sep 9 05:44:56.275170 containerd[1603]: time="2025-09-09T05:44:56.275019907Z" level=info msg="received exit event container_id:\"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" id:\"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" pid:3384 exited_at:{seconds:1757396696 nanos:274684046}"
Sep 9 05:44:56.275170 containerd[1603]: time="2025-09-09T05:44:56.275064756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" id:\"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" pid:3384 exited_at:{seconds:1757396696 nanos:274684046}"
Sep 9 05:44:56.281885 containerd[1603]: time="2025-09-09T05:44:56.281443982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" id:\"43aa9b7dcacd7f04b7463026f74b2aaf2ab7ccb18c45fb4eb2652b348609dc39\" pid:4456 exited_at:{seconds:1757396696 nanos:280576195}"
Sep 9 05:44:56.288778 containerd[1603]: time="2025-09-09T05:44:56.288661465Z" level=info msg="StopContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" with timeout 2 (s)"
Sep 9 05:44:56.289362 containerd[1603]: time="2025-09-09T05:44:56.289315775Z" level=info msg="Stop container \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" with signal terminated"
Sep 9 05:44:56.305368 systemd-networkd[1457]: lxc_health: Link DOWN
Sep 9 05:44:56.307507 systemd-networkd[1457]: lxc_health: Lost carrier
Sep 9 05:44:56.327486 systemd[1]: cri-containerd-7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2.scope: Deactivated successfully.
Sep 9 05:44:56.328909 systemd[1]: cri-containerd-7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2.scope: Consumed 10.174s CPU time, 127M memory peak, 120K read from disk, 13.3M written to disk.
Sep 9 05:44:56.335054 containerd[1603]: time="2025-09-09T05:44:56.334998721Z" level=info msg="received exit event container_id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" pid:3467 exited_at:{seconds:1757396696 nanos:334246215}"
Sep 9 05:44:56.336672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e-rootfs.mount: Deactivated successfully.
Sep 9 05:44:56.337001 containerd[1603]: time="2025-09-09T05:44:56.336894160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" id:\"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" pid:3467 exited_at:{seconds:1757396696 nanos:334246215}"
Sep 9 05:44:56.369195 containerd[1603]: time="2025-09-09T05:44:56.369138659Z" level=info msg="StopContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" returns successfully"
Sep 9 05:44:56.370198 containerd[1603]: time="2025-09-09T05:44:56.370092011Z" level=info msg="StopPodSandbox for \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\""
Sep 9 05:44:56.370198 containerd[1603]: time="2025-09-09T05:44:56.370185721Z" level=info msg="Container to stop \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.379259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2-rootfs.mount: Deactivated successfully.
Sep 9 05:44:56.389096 systemd[1]: cri-containerd-d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398.scope: Deactivated successfully.
Sep 9 05:44:56.391320 containerd[1603]: time="2025-09-09T05:44:56.390687710Z" level=info msg="StopContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" returns successfully"
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391476755Z" level=info msg="StopPodSandbox for \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\""
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391564501Z" level=info msg="Container to stop \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391586649Z" level=info msg="Container to stop \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391682373Z" level=info msg="Container to stop \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391700113Z" level=info msg="Container to stop \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.392129 containerd[1603]: time="2025-09-09T05:44:56.391716812Z" level=info msg="Container to stop \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 05:44:56.393504 containerd[1603]: time="2025-09-09T05:44:56.393304169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" id:\"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" pid:3055 exit_status:137 exited_at:{seconds:1757396696 nanos:392073280}"
Sep 9 05:44:56.410838 systemd[1]: cri-containerd-fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024.scope: Deactivated successfully.
Sep 9 05:44:56.470776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398-rootfs.mount: Deactivated successfully.
Sep 9 05:44:56.475250 containerd[1603]: time="2025-09-09T05:44:56.475192395Z" level=info msg="shim disconnected" id=d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398 namespace=k8s.io
Sep 9 05:44:56.475250 containerd[1603]: time="2025-09-09T05:44:56.475248395Z" level=warning msg="cleaning up after shim disconnected" id=d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398 namespace=k8s.io
Sep 9 05:44:56.475485 containerd[1603]: time="2025-09-09T05:44:56.475262236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 05:44:56.475797 containerd[1603]: time="2025-09-09T05:44:56.475653605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" id:\"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" pid:2985 exit_status:137 exited_at:{seconds:1757396696 nanos:416366484}"
Sep 9 05:44:56.476522 containerd[1603]: time="2025-09-09T05:44:56.476484367Z" level=info msg="received exit event sandbox_id:\"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" exit_status:137 exited_at:{seconds:1757396696 nanos:392073280}"
Sep 9 05:44:56.478850 containerd[1603]: time="2025-09-09T05:44:56.478814236Z" level=info msg="TearDown network for sandbox \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" successfully"
Sep 9 05:44:56.480678 containerd[1603]: time="2025-09-09T05:44:56.480642009Z" level=info msg="StopPodSandbox for \"d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398\" returns successfully"
Sep 9 05:44:56.484311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3e0a22251deff2d7c6b8897b86665a0dfd253f01b2d216b2be01e94f68b5398-shm.mount: Deactivated successfully.
Sep 9 05:44:56.503138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024-rootfs.mount: Deactivated successfully.
Sep 9 05:44:56.509631 containerd[1603]: time="2025-09-09T05:44:56.509223129Z" level=info msg="received exit event sandbox_id:\"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" exit_status:137 exited_at:{seconds:1757396696 nanos:416366484}"
Sep 9 05:44:56.512950 containerd[1603]: time="2025-09-09T05:44:56.512886250Z" level=info msg="shim disconnected" id=fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024 namespace=k8s.io
Sep 9 05:44:56.513060 containerd[1603]: time="2025-09-09T05:44:56.512950878Z" level=warning msg="cleaning up after shim disconnected" id=fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024 namespace=k8s.io
Sep 9 05:44:56.513060 containerd[1603]: time="2025-09-09T05:44:56.512965392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 05:44:56.517813 containerd[1603]: time="2025-09-09T05:44:56.516530916Z" level=info msg="TearDown network for sandbox \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" successfully"
Sep 9 05:44:56.517813 containerd[1603]: time="2025-09-09T05:44:56.516568360Z" level=info msg="StopPodSandbox for \"fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024\" returns successfully"
Sep 9 05:44:56.633187 kubelet[2838]: I0909 05:44:56.633083 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgbvn\" (UniqueName: \"kubernetes.io/projected/f60740ac-0181-4dbd-a5af-01ea43447a12-kube-api-access-dgbvn\") pod \"f60740ac-0181-4dbd-a5af-01ea43447a12\" (UID: \"f60740ac-0181-4dbd-a5af-01ea43447a12\") "
Sep 9 05:44:56.633187 kubelet[2838]: I0909 05:44:56.633161 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f60740ac-0181-4dbd-a5af-01ea43447a12-cilium-config-path\") pod \"f60740ac-0181-4dbd-a5af-01ea43447a12\" (UID: \"f60740ac-0181-4dbd-a5af-01ea43447a12\") "
Sep 9 05:44:56.636613 kubelet[2838]: I0909 05:44:56.636533 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60740ac-0181-4dbd-a5af-01ea43447a12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f60740ac-0181-4dbd-a5af-01ea43447a12" (UID: "f60740ac-0181-4dbd-a5af-01ea43447a12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 05:44:56.638771 kubelet[2838]: I0909 05:44:56.638729 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60740ac-0181-4dbd-a5af-01ea43447a12-kube-api-access-dgbvn" (OuterVolumeSpecName: "kube-api-access-dgbvn") pod "f60740ac-0181-4dbd-a5af-01ea43447a12" (UID: "f60740ac-0181-4dbd-a5af-01ea43447a12"). InnerVolumeSpecName "kube-api-access-dgbvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 05:44:56.734373 kubelet[2838]: I0909 05:44:56.734303 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-etc-cni-netd\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734373 kubelet[2838]: I0909 05:44:56.734389 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cni-path\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734415 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-run\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734439 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-cgroup\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734463 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-hostproc\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734484 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-net\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734516 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f77fb038-4f2b-4b28-8204-d4bce19f956c-clustermesh-secrets\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.734761 kubelet[2838]: I0909 05:44:56.734545 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-config-path\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734570 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-xtables-lock\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734631 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-lib-modules\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734656 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-bpf-maps\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734686 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd57j\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-kube-api-access-vd57j\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734711 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-kernel\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735040 kubelet[2838]: I0909 05:44:56.734740 2838 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-hubble-tls\") pod \"f77fb038-4f2b-4b28-8204-d4bce19f956c\" (UID: \"f77fb038-4f2b-4b28-8204-d4bce19f956c\") "
Sep 9 05:44:56.735421 kubelet[2838]: I0909 05:44:56.734806 2838 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f60740ac-0181-4dbd-a5af-01ea43447a12-cilium-config-path\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.735421 kubelet[2838]: I0909 05:44:56.734828 2838 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgbvn\" (UniqueName: \"kubernetes.io/projected/f60740ac-0181-4dbd-a5af-01ea43447a12-kube-api-access-dgbvn\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.737438 kubelet[2838]: I0909 05:44:56.737382 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737658 kubelet[2838]: I0909 05:44:56.737624 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737777 kubelet[2838]: I0909 05:44:56.737756 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737852 kubelet[2838]: I0909 05:44:56.737667 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737852 kubelet[2838]: I0909 05:44:56.737702 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737852 kubelet[2838]: I0909 05:44:56.737719 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.737852 kubelet[2838]: I0909 05:44:56.737739 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.738095 kubelet[2838]: I0909 05:44:56.738070 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.739127 kubelet[2838]: I0909 05:44:56.738772 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.740257 kubelet[2838]: I0909 05:44:56.740151 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 05:44:56.742840 kubelet[2838]: I0909 05:44:56.742808 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 05:44:56.745578 kubelet[2838]: I0909 05:44:56.745546 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-kube-api-access-vd57j" (OuterVolumeSpecName: "kube-api-access-vd57j") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "kube-api-access-vd57j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 05:44:56.746132 kubelet[2838]: I0909 05:44:56.746099 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 05:44:56.746575 kubelet[2838]: I0909 05:44:56.746532 2838 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77fb038-4f2b-4b28-8204-d4bce19f956c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f77fb038-4f2b-4b28-8204-d4bce19f956c" (UID: "f77fb038-4f2b-4b28-8204-d4bce19f956c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835708 2838 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-hubble-tls\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835768 2838 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-etc-cni-netd\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835792 2838 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cni-path\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835809 2838 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-run\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835824 2838 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-cgroup\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835840 2838 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-hostproc\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.835865 kubelet[2838]: I0909 05:44:56.835857 2838 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-net\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835873 2838 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f77fb038-4f2b-4b28-8204-d4bce19f956c-clustermesh-secrets\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835910 2838 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f77fb038-4f2b-4b28-8204-d4bce19f956c-cilium-config-path\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835924 2838 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-xtables-lock\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835941 2838 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-lib-modules\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835956 2838 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-bpf-maps\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835972 2838 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd57j\" (UniqueName: \"kubernetes.io/projected/f77fb038-4f2b-4b28-8204-d4bce19f956c-kube-api-access-vd57j\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:56.836316 kubelet[2838]: I0909 05:44:56.835987 2838 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f77fb038-4f2b-4b28-8204-d4bce19f956c-host-proc-sys-kernel\") on node \"ci-4452-0-0-nightly-20250908-2100-732ad6f6796819f571d5\" DevicePath \"\""
Sep 9 05:44:57.122336 kubelet[2838]: I0909 05:44:57.120802 2838 scope.go:117] "RemoveContainer" containerID="b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e"
Sep 9 05:44:57.127005 containerd[1603]: time="2025-09-09T05:44:57.126910185Z" level=info msg="RemoveContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\""
Sep 9 05:44:57.135918 systemd[1]: Removed slice kubepods-besteffort-podf60740ac_0181_4dbd_a5af_01ea43447a12.slice - libcontainer container kubepods-besteffort-podf60740ac_0181_4dbd_a5af_01ea43447a12.slice.
Sep 9 05:44:57.137196 containerd[1603]: time="2025-09-09T05:44:57.136740930Z" level=info msg="RemoveContainer for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" returns successfully" Sep 9 05:44:57.137308 kubelet[2838]: I0909 05:44:57.137241 2838 scope.go:117] "RemoveContainer" containerID="b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e" Sep 9 05:44:57.137853 containerd[1603]: time="2025-09-09T05:44:57.137805788Z" level=error msg="ContainerStatus for \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\": not found" Sep 9 05:44:57.138961 kubelet[2838]: E0909 05:44:57.138760 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\": not found" containerID="b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e" Sep 9 05:44:57.140136 kubelet[2838]: I0909 05:44:57.139402 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e"} err="failed to get container status \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0bf2afcedda49e4b3cacdb2c885c17c800041404a59725cc1cd95c25a5ed70e\": not found" Sep 9 05:44:57.141142 kubelet[2838]: I0909 05:44:57.140359 2838 scope.go:117] "RemoveContainer" containerID="7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2" Sep 9 05:44:57.145756 containerd[1603]: time="2025-09-09T05:44:57.145722344Z" level=info msg="RemoveContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\"" Sep 9 05:44:57.148056 systemd[1]: 
Removed slice kubepods-burstable-podf77fb038_4f2b_4b28_8204_d4bce19f956c.slice - libcontainer container kubepods-burstable-podf77fb038_4f2b_4b28_8204_d4bce19f956c.slice. Sep 9 05:44:57.148241 systemd[1]: kubepods-burstable-podf77fb038_4f2b_4b28_8204_d4bce19f956c.slice: Consumed 10.353s CPU time, 127.5M memory peak, 120K read from disk, 13.3M written to disk. Sep 9 05:44:57.156869 containerd[1603]: time="2025-09-09T05:44:57.156831553Z" level=info msg="RemoveContainer for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" returns successfully" Sep 9 05:44:57.157186 kubelet[2838]: I0909 05:44:57.157064 2838 scope.go:117] "RemoveContainer" containerID="384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27" Sep 9 05:44:57.163259 containerd[1603]: time="2025-09-09T05:44:57.163213458Z" level=info msg="RemoveContainer for \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\"" Sep 9 05:44:57.172895 containerd[1603]: time="2025-09-09T05:44:57.172821850Z" level=info msg="RemoveContainer for \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" returns successfully" Sep 9 05:44:57.173368 kubelet[2838]: I0909 05:44:57.173342 2838 scope.go:117] "RemoveContainer" containerID="9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade" Sep 9 05:44:57.177681 containerd[1603]: time="2025-09-09T05:44:57.177631783Z" level=info msg="RemoveContainer for \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\"" Sep 9 05:44:57.187703 containerd[1603]: time="2025-09-09T05:44:57.187662517Z" level=info msg="RemoveContainer for \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" returns successfully" Sep 9 05:44:57.188619 kubelet[2838]: I0909 05:44:57.187947 2838 scope.go:117] "RemoveContainer" containerID="a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7" Sep 9 05:44:57.191151 containerd[1603]: time="2025-09-09T05:44:57.190540723Z" level=info msg="RemoveContainer for 
\"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\"" Sep 9 05:44:57.194603 containerd[1603]: time="2025-09-09T05:44:57.194556284Z" level=info msg="RemoveContainer for \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" returns successfully" Sep 9 05:44:57.194924 kubelet[2838]: I0909 05:44:57.194889 2838 scope.go:117] "RemoveContainer" containerID="ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624" Sep 9 05:44:57.197133 containerd[1603]: time="2025-09-09T05:44:57.196622833Z" level=info msg="RemoveContainer for \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\"" Sep 9 05:44:57.202230 containerd[1603]: time="2025-09-09T05:44:57.202192474Z" level=info msg="RemoveContainer for \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" returns successfully" Sep 9 05:44:57.202541 kubelet[2838]: I0909 05:44:57.202516 2838 scope.go:117] "RemoveContainer" containerID="7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2" Sep 9 05:44:57.202947 containerd[1603]: time="2025-09-09T05:44:57.202903471Z" level=error msg="ContainerStatus for \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\": not found" Sep 9 05:44:57.203143 kubelet[2838]: E0909 05:44:57.203107 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\": not found" containerID="7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2" Sep 9 05:44:57.203236 kubelet[2838]: I0909 05:44:57.203152 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2"} err="failed to get container 
status \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7282a96006ff02973bc45116143c63d4976a83e1684728807a744b4a02ce1fa2\": not found" Sep 9 05:44:57.203236 kubelet[2838]: I0909 05:44:57.203181 2838 scope.go:117] "RemoveContainer" containerID="384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27" Sep 9 05:44:57.203446 containerd[1603]: time="2025-09-09T05:44:57.203391379Z" level=error msg="ContainerStatus for \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\": not found" Sep 9 05:44:57.203629 kubelet[2838]: E0909 05:44:57.203569 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\": not found" containerID="384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27" Sep 9 05:44:57.203759 kubelet[2838]: I0909 05:44:57.203726 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27"} err="failed to get container status \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\": rpc error: code = NotFound desc = an error occurred when try to find container \"384002a4add3905c33b68c531c4f48e508e6781ed160d912df68f33cd4005f27\": not found" Sep 9 05:44:57.203844 kubelet[2838]: I0909 05:44:57.203761 2838 scope.go:117] "RemoveContainer" containerID="9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade" Sep 9 05:44:57.204065 containerd[1603]: time="2025-09-09T05:44:57.204022504Z" level=error msg="ContainerStatus for \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\": not found" Sep 9 05:44:57.204265 kubelet[2838]: E0909 05:44:57.204225 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\": not found" containerID="9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade" Sep 9 05:44:57.204365 kubelet[2838]: I0909 05:44:57.204270 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade"} err="failed to get container status \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b23ee9bde18a7bb9dce4f22a961fbcd70635051602dc0e78e214a1881d00ade\": not found" Sep 9 05:44:57.204365 kubelet[2838]: I0909 05:44:57.204294 2838 scope.go:117] "RemoveContainer" containerID="a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7" Sep 9 05:44:57.204678 containerd[1603]: time="2025-09-09T05:44:57.204539856Z" level=error msg="ContainerStatus for \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\": not found" Sep 9 05:44:57.204771 kubelet[2838]: E0909 05:44:57.204754 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\": not found" containerID="a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7" Sep 9 05:44:57.204839 kubelet[2838]: I0909 05:44:57.204783 
2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7"} err="failed to get container status \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3f8cea94056840b17170fe64502383cae0474dea210926eccb3ea2722e814f7\": not found" Sep 9 05:44:57.204839 kubelet[2838]: I0909 05:44:57.204805 2838 scope.go:117] "RemoveContainer" containerID="ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624" Sep 9 05:44:57.205056 containerd[1603]: time="2025-09-09T05:44:57.204989801Z" level=error msg="ContainerStatus for \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\": not found" Sep 9 05:44:57.205229 kubelet[2838]: E0909 05:44:57.205148 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\": not found" containerID="ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624" Sep 9 05:44:57.205229 kubelet[2838]: I0909 05:44:57.205218 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624"} err="failed to get container status \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad9616f389ac2a5be24efeeab3eecaad86c092268508296a0eb9c2f2217cd624\": not found" Sep 9 05:44:57.333622 systemd[1]: 
var-lib-kubelet-pods-f60740ac\x2d0181\x2d4dbd\x2da5af\x2d01ea43447a12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgbvn.mount: Deactivated successfully. Sep 9 05:44:57.333791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb91ab71f3dc26c3a660a11a4ca2b881040bb34b2c2be1397762873e82697024-shm.mount: Deactivated successfully. Sep 9 05:44:57.333913 systemd[1]: var-lib-kubelet-pods-f77fb038\x2d4f2b\x2d4b28\x2d8204\x2dd4bce19f956c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvd57j.mount: Deactivated successfully. Sep 9 05:44:57.334036 systemd[1]: var-lib-kubelet-pods-f77fb038\x2d4f2b\x2d4b28\x2d8204\x2dd4bce19f956c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:44:57.334157 systemd[1]: var-lib-kubelet-pods-f77fb038\x2d4f2b\x2d4b28\x2d8204\x2dd4bce19f956c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:44:58.200614 sshd[4432]: Connection closed by 139.178.89.65 port 52838 Sep 9 05:44:58.202028 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Sep 9 05:44:58.207324 systemd[1]: sshd@27-10.128.0.68:22-139.178.89.65:52838.service: Deactivated successfully. Sep 9 05:44:58.211104 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 05:44:58.211408 systemd[1]: session-28.scope: Consumed 1.287s CPU time, 23.9M memory peak. Sep 9 05:44:58.213831 systemd-logind[1533]: Session 28 logged out. Waiting for processes to exit. Sep 9 05:44:58.216065 systemd-logind[1533]: Removed session 28. Sep 9 05:44:58.257917 systemd[1]: Started sshd@28-10.128.0.68:22-139.178.89.65:52852.service - OpenSSH per-connection server daemon (139.178.89.65:52852). 
Sep 9 05:44:58.365609 ntpd[1517]: Deleting interface #12 lxc_health, fe80::b808:e2ff:fee8:7df2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=125 secs Sep 9 05:44:58.366047 ntpd[1517]: 9 Sep 05:44:58 ntpd[1517]: Deleting interface #12 lxc_health, fe80::b808:e2ff:fee8:7df2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=125 secs Sep 9 05:44:58.565584 sshd[4581]: Accepted publickey for core from 139.178.89.65 port 52852 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:44:58.567310 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:44:58.575260 kubelet[2838]: I0909 05:44:58.575200 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60740ac-0181-4dbd-a5af-01ea43447a12" path="/var/lib/kubelet/pods/f60740ac-0181-4dbd-a5af-01ea43447a12/volumes" Sep 9 05:44:58.576035 kubelet[2838]: I0909 05:44:58.575985 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" path="/var/lib/kubelet/pods/f77fb038-4f2b-4b28-8204-d4bce19f956c/volumes" Sep 9 05:44:58.579423 systemd-logind[1533]: New session 29 of user core. Sep 9 05:44:58.589859 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 9 05:44:59.734845 kubelet[2838]: E0909 05:44:59.734763 2838 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:45:00.332617 kubelet[2838]: E0909 05:45:00.332545 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="apply-sysctl-overwrites" Sep 9 05:45:00.332617 kubelet[2838]: E0909 05:45:00.332617 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="clean-cilium-state" Sep 9 05:45:00.332617 kubelet[2838]: E0909 05:45:00.332631 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="mount-cgroup" Sep 9 05:45:00.332915 kubelet[2838]: E0909 05:45:00.332645 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="mount-bpf-fs" Sep 9 05:45:00.332915 kubelet[2838]: E0909 05:45:00.332655 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f60740ac-0181-4dbd-a5af-01ea43447a12" containerName="cilium-operator" Sep 9 05:45:00.332915 kubelet[2838]: E0909 05:45:00.332665 2838 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="cilium-agent" Sep 9 05:45:00.332915 kubelet[2838]: I0909 05:45:00.332706 2838 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60740ac-0181-4dbd-a5af-01ea43447a12" containerName="cilium-operator" Sep 9 05:45:00.332915 kubelet[2838]: I0909 05:45:00.332717 2838 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77fb038-4f2b-4b28-8204-d4bce19f956c" containerName="cilium-agent" Sep 9 05:45:00.342699 sshd[4584]: Connection closed by 139.178.89.65 port 52852 Sep 9 05:45:00.341511 sshd-session[4581]: pam_unix(sshd:session): 
session closed for user core Sep 9 05:45:00.351367 systemd[1]: Created slice kubepods-burstable-pod4d3ae90d_25a1_4008_a1dd_017d53f00d3f.slice - libcontainer container kubepods-burstable-pod4d3ae90d_25a1_4008_a1dd_017d53f00d3f.slice. Sep 9 05:45:00.360370 systemd[1]: sshd@28-10.128.0.68:22-139.178.89.65:52852.service: Deactivated successfully. Sep 9 05:45:00.366041 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 05:45:00.366726 systemd[1]: session-29.scope: Consumed 1.539s CPU time, 23.8M memory peak. Sep 9 05:45:00.374911 systemd-logind[1533]: Session 29 logged out. Waiting for processes to exit. Sep 9 05:45:00.377798 systemd-logind[1533]: Removed session 29. Sep 9 05:45:00.402957 systemd[1]: Started sshd@29-10.128.0.68:22-139.178.89.65:60336.service - OpenSSH per-connection server daemon (139.178.89.65:60336). Sep 9 05:45:00.462631 kubelet[2838]: I0909 05:45:00.462546 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-lib-modules\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.462862 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-xtables-lock\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.462907 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-cilium-run\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.462935 
2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-bpf-maps\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.462971 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-host-proc-sys-net\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.463004 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-cilium-ipsec-secrets\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463338 kubelet[2838]: I0909 05:45:00.463034 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-cilium-cgroup\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463063 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-clustermesh-secrets\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463090 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-cni-path\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463119 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-host-proc-sys-kernel\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463159 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6kc\" (UniqueName: \"kubernetes.io/projected/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-kube-api-access-8s6kc\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463194 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-etc-cni-netd\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463640 kubelet[2838]: I0909 05:45:00.463221 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-hubble-tls\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463837 kubelet[2838]: I0909 05:45:00.463251 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-cilium-config-path\") pod 
\"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.463837 kubelet[2838]: I0909 05:45:00.463283 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d3ae90d-25a1-4008-a1dd-017d53f00d3f-hostproc\") pod \"cilium-7h54t\" (UID: \"4d3ae90d-25a1-4008-a1dd-017d53f00d3f\") " pod="kube-system/cilium-7h54t" Sep 9 05:45:00.672696 containerd[1603]: time="2025-09-09T05:45:00.672009120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7h54t,Uid:4d3ae90d-25a1-4008-a1dd-017d53f00d3f,Namespace:kube-system,Attempt:0,}" Sep 9 05:45:00.696328 containerd[1603]: time="2025-09-09T05:45:00.696271432Z" level=info msg="connecting to shim d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:45:00.721401 sshd[4594]: Accepted publickey for core from 139.178.89.65 port 60336 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw Sep 9 05:45:00.723047 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:45:00.732064 systemd-logind[1533]: New session 30 of user core. Sep 9 05:45:00.747819 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 05:45:00.763842 systemd[1]: Started cri-containerd-d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec.scope - libcontainer container d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec. 
Sep 9 05:45:00.814359 containerd[1603]: time="2025-09-09T05:45:00.814169175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7h54t,Uid:4d3ae90d-25a1-4008-a1dd-017d53f00d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\"" Sep 9 05:45:00.818560 containerd[1603]: time="2025-09-09T05:45:00.818516409Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:45:00.826850 containerd[1603]: time="2025-09-09T05:45:00.826787169Z" level=info msg="Container 2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:45:00.834748 containerd[1603]: time="2025-09-09T05:45:00.834695789Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\"" Sep 9 05:45:00.835476 containerd[1603]: time="2025-09-09T05:45:00.835434921Z" level=info msg="StartContainer for \"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\"" Sep 9 05:45:00.836853 containerd[1603]: time="2025-09-09T05:45:00.836587258Z" level=info msg="connecting to shim 2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" protocol=ttrpc version=3 Sep 9 05:45:00.863914 systemd[1]: Started cri-containerd-2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a.scope - libcontainer container 2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a. 
Sep 9 05:45:00.914728 containerd[1603]: time="2025-09-09T05:45:00.914672930Z" level=info msg="StartContainer for \"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\" returns successfully" Sep 9 05:45:00.925218 systemd[1]: cri-containerd-2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a.scope: Deactivated successfully. Sep 9 05:45:00.929150 containerd[1603]: time="2025-09-09T05:45:00.929080026Z" level=info msg="received exit event container_id:\"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\" id:\"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\" pid:4661 exited_at:{seconds:1757396700 nanos:928777730}" Sep 9 05:45:00.929959 containerd[1603]: time="2025-09-09T05:45:00.929904733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\" id:\"2dd5b909d90c8c4706f87bf4b69b27c5c35e2d6b8325e048e10ca45bc220df7a\" pid:4661 exited_at:{seconds:1757396700 nanos:928777730}" Sep 9 05:45:00.946619 sshd[4632]: Connection closed by 139.178.89.65 port 60336 Sep 9 05:45:00.944404 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Sep 9 05:45:00.953538 systemd[1]: sshd@29-10.128.0.68:22-139.178.89.65:60336.service: Deactivated successfully. Sep 9 05:45:00.958515 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 05:45:00.961859 systemd-logind[1533]: Session 30 logged out. Waiting for processes to exit. Sep 9 05:45:00.965391 systemd-logind[1533]: Removed session 30. Sep 9 05:45:00.999055 systemd[1]: Started sshd@30-10.128.0.68:22-139.178.89.65:60338.service - OpenSSH per-connection server daemon (139.178.89.65:60338). 
Sep 9 05:45:01.155892 containerd[1603]: time="2025-09-09T05:45:01.155776297Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:45:01.169207 containerd[1603]: time="2025-09-09T05:45:01.168850001Z" level=info msg="Container d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:45:01.176658 containerd[1603]: time="2025-09-09T05:45:01.176496926Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\"" Sep 9 05:45:01.178856 containerd[1603]: time="2025-09-09T05:45:01.178798917Z" level=info msg="StartContainer for \"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\"" Sep 9 05:45:01.181401 containerd[1603]: time="2025-09-09T05:45:01.181022873Z" level=info msg="connecting to shim d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" protocol=ttrpc version=3 Sep 9 05:45:01.215848 systemd[1]: Started cri-containerd-d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca.scope - libcontainer container d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca. Sep 9 05:45:01.264197 containerd[1603]: time="2025-09-09T05:45:01.264144615Z" level=info msg="StartContainer for \"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\" returns successfully" Sep 9 05:45:01.273978 systemd[1]: cri-containerd-d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca.scope: Deactivated successfully. 
Sep 9 05:45:01.277094 containerd[1603]: time="2025-09-09T05:45:01.276903860Z" level=info msg="received exit event container_id:\"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\" id:\"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\" pid:4714 exited_at:{seconds:1757396701 nanos:276129013}"
Sep 9 05:45:01.277412 containerd[1603]: time="2025-09-09T05:45:01.277380423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\" id:\"d164dce256dfe70a404cdac36b0de6424dc7fb96d19fd8b43255167b67533eca\" pid:4714 exited_at:{seconds:1757396701 nanos:276129013}"
Sep 9 05:45:01.313895 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 60338 ssh2: RSA SHA256:QSDpUihtIai1/X8svdSqOld/LKc/E5lpY4TpkeXfmcw
Sep 9 05:45:01.315994 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:45:01.324398 systemd-logind[1533]: New session 31 of user core.
Sep 9 05:45:01.328861 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 9 05:45:02.160485 containerd[1603]: time="2025-09-09T05:45:02.160412946Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 05:45:02.178960 containerd[1603]: time="2025-09-09T05:45:02.178896765Z" level=info msg="Container 03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:45:02.189775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929437183.mount: Deactivated successfully.
Sep 9 05:45:02.201430 containerd[1603]: time="2025-09-09T05:45:02.200311015Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\""
Sep 9 05:45:02.203010 containerd[1603]: time="2025-09-09T05:45:02.202973628Z" level=info msg="StartContainer for \"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\""
Sep 9 05:45:02.206184 containerd[1603]: time="2025-09-09T05:45:02.206136431Z" level=info msg="connecting to shim 03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" protocol=ttrpc version=3
Sep 9 05:45:02.242911 systemd[1]: Started cri-containerd-03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec.scope - libcontainer container 03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec.
Sep 9 05:45:02.314002 containerd[1603]: time="2025-09-09T05:45:02.313931623Z" level=info msg="StartContainer for \"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\" returns successfully"
Sep 9 05:45:02.317420 systemd[1]: cri-containerd-03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec.scope: Deactivated successfully.
Sep 9 05:45:02.323075 containerd[1603]: time="2025-09-09T05:45:02.322479162Z" level=info msg="received exit event container_id:\"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\" id:\"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\" pid:4765 exited_at:{seconds:1757396702 nanos:321942915}"
Sep 9 05:45:02.323998 containerd[1603]: time="2025-09-09T05:45:02.323946912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\" id:\"03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec\" pid:4765 exited_at:{seconds:1757396702 nanos:321942915}"
Sep 9 05:45:02.363169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03c18f1a0e82e496f16f5e96eee323e6db8d2abd13d988932f9edba1220bd7ec-rootfs.mount: Deactivated successfully.
Sep 9 05:45:03.167628 containerd[1603]: time="2025-09-09T05:45:03.166246962Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 05:45:03.183123 containerd[1603]: time="2025-09-09T05:45:03.181655564Z" level=info msg="Container 2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:45:03.196586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494265241.mount: Deactivated successfully.
Sep 9 05:45:03.203089 containerd[1603]: time="2025-09-09T05:45:03.203019425Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\""
Sep 9 05:45:03.207607 containerd[1603]: time="2025-09-09T05:45:03.207542737Z" level=info msg="StartContainer for \"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\""
Sep 9 05:45:03.209053 containerd[1603]: time="2025-09-09T05:45:03.208961356Z" level=info msg="connecting to shim 2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" protocol=ttrpc version=3
Sep 9 05:45:03.250859 systemd[1]: Started cri-containerd-2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31.scope - libcontainer container 2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31.
Sep 9 05:45:03.306356 systemd[1]: cri-containerd-2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31.scope: Deactivated successfully.
Sep 9 05:45:03.310126 containerd[1603]: time="2025-09-09T05:45:03.310069041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\" id:\"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\" pid:4805 exited_at:{seconds:1757396703 nanos:309128296}"
Sep 9 05:45:03.310884 containerd[1603]: time="2025-09-09T05:45:03.310807040Z" level=info msg="received exit event container_id:\"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\" id:\"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\" pid:4805 exited_at:{seconds:1757396703 nanos:309128296}"
Sep 9 05:45:03.324483 containerd[1603]: time="2025-09-09T05:45:03.324436742Z" level=info msg="StartContainer for \"2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31\" returns successfully"
Sep 9 05:45:03.357070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af4f98865fe9ee53a73643d5835e9713fcc1aeae9d6ed08e3412b9fc089fd31-rootfs.mount: Deactivated successfully.
Sep 9 05:45:04.177978 containerd[1603]: time="2025-09-09T05:45:04.177916265Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 05:45:04.196486 containerd[1603]: time="2025-09-09T05:45:04.196422956Z" level=info msg="Container ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:45:04.218496 containerd[1603]: time="2025-09-09T05:45:04.218434649Z" level=info msg="CreateContainer within sandbox \"d7a11e8fbb446e12b55698f44642b33a10553e2967cdf75ea7db47200b1657ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\""
Sep 9 05:45:04.222721 containerd[1603]: time="2025-09-09T05:45:04.221510923Z" level=info msg="StartContainer for \"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\""
Sep 9 05:45:04.223258 containerd[1603]: time="2025-09-09T05:45:04.223212290Z" level=info msg="connecting to shim ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2" address="unix:///run/containerd/s/18f0d634ec4b935ea2b7ad4f0872e496b484735b58f9d1015260751bc4c2b1d6" protocol=ttrpc version=3
Sep 9 05:45:04.267974 systemd[1]: Started cri-containerd-ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2.scope - libcontainer container ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2.
Sep 9 05:45:04.327923 containerd[1603]: time="2025-09-09T05:45:04.327857759Z" level=info msg="StartContainer for \"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" returns successfully"
Sep 9 05:45:04.451471 containerd[1603]: time="2025-09-09T05:45:04.451263023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"6c1acc7dc29c6527db350a180e0ac7c15deaff3f67800148fbf480018e958021\" pid:4871 exited_at:{seconds:1757396704 nanos:449547870}"
Sep 9 05:45:04.897648 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 9 05:45:05.203352 kubelet[2838]: I0909 05:45:05.202717 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7h54t" podStartSLOduration=5.202684418 podStartE2EDuration="5.202684418s" podCreationTimestamp="2025-09-09 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:45:05.202344375 +0000 UTC m=+160.840334088" watchObservedRunningTime="2025-09-09 05:45:05.202684418 +0000 UTC m=+160.840674133"
Sep 9 05:45:05.764239 containerd[1603]: time="2025-09-09T05:45:05.764135208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"a02094c53fbd70369e2d6f9c7ba4a6f04f2ff9fdf422a60b5a54940e36532ddb\" pid:4947 exit_status:1 exited_at:{seconds:1757396705 nanos:763249567}"
Sep 9 05:45:07.942417 containerd[1603]: time="2025-09-09T05:45:07.942366552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"fa527c2813525f6e1eb7b9db073939fbee4bb2589f8022bd6aa066213d0e6923\" pid:5328 exit_status:1 exited_at:{seconds:1757396707 nanos:941835572}"
Sep 9 05:45:08.099242 systemd-networkd[1457]: lxc_health: Link UP
Sep 9 05:45:08.101271 systemd-networkd[1457]: lxc_health: Gained carrier
Sep 9 05:45:09.987818 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Sep 9 05:45:10.223640 containerd[1603]: time="2025-09-09T05:45:10.222267547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"1f7c2a3381e89b6ca1eaf030f0850e658b0743b26666ce4a54cd0864c02ff0eb\" pid:5418 exited_at:{seconds:1757396710 nanos:221212268}"
Sep 9 05:45:12.365630 ntpd[1517]: Listen normally on 15 lxc_health [fe80::90f1:42ff:fe10:15cc%14]:123
Sep 9 05:45:12.366240 ntpd[1517]: 9 Sep 05:45:12 ntpd[1517]: Listen normally on 15 lxc_health [fe80::90f1:42ff:fe10:15cc%14]:123
Sep 9 05:45:12.492241 containerd[1603]: time="2025-09-09T05:45:12.492175130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"b5f1f065ee448d08ce162916f40a8dc8207a0c95ce9541b57f0434de6d7d390b\" pid:5449 exited_at:{seconds:1757396712 nanos:490507383}"
Sep 9 05:45:14.655741 containerd[1603]: time="2025-09-09T05:45:14.655658411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba15714eaba9487dbe3f676c702587dee69ef020d218320acc5ed27ac3878ee2\" id:\"76d80a83aa52ec24f9c366d96333cd31e2b2154c90faa94dbdf111284f94df73\" pid:5478 exited_at:{seconds:1757396714 nanos:655071982}"
Sep 9 05:45:14.868308 sshd[4747]: Connection closed by 139.178.89.65 port 60338
Sep 9 05:45:14.869563 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Sep 9 05:45:14.877994 systemd[1]: sshd@30-10.128.0.68:22-139.178.89.65:60338.service: Deactivated successfully.
Sep 9 05:45:14.881239 systemd[1]: session-31.scope: Deactivated successfully.
Sep 9 05:45:14.883162 systemd-logind[1533]: Session 31 logged out. Waiting for processes to exit.
Sep 9 05:45:14.886219 systemd-logind[1533]: Removed session 31.