Sep 16 04:51:51.621446 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 04:51:51.621502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:51:51.621521 kernel: BIOS-provided physical RAM map: Sep 16 04:51:51.621535 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 16 04:51:51.621547 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 16 04:51:51.621560 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 16 04:51:51.621587 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 16 04:51:51.621601 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 16 04:51:51.621614 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable Sep 16 04:51:51.621628 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data Sep 16 04:51:51.621642 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable Sep 16 04:51:51.621657 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 16 04:51:51.621671 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 16 04:51:51.621871 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 16 04:51:51.621898 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 16 04:51:51.621915 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 16 04:51:51.621931 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 16 04:51:51.621947 kernel: NX (Execute Disable) protection: active Sep 16 04:51:51.621964 kernel: APIC: Static calls initialized Sep 16 04:51:51.621980 kernel: efi: EFI v2.7 by EDK II Sep 16 04:51:51.621995 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018 Sep 16 04:51:51.622011 kernel: random: crng init done Sep 16 04:51:51.622030 kernel: secureboot: Secure boot disabled Sep 16 04:51:51.622045 kernel: SMBIOS 2.4 present. 
Sep 16 04:51:51.622061 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 16 04:51:51.622076 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:51:51.622091 kernel: Hypervisor detected: KVM Sep 16 04:51:51.622106 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 04:51:51.622121 kernel: kvm-clock: using sched offset of 14813600420 cycles Sep 16 04:51:51.622138 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:51:51.622154 kernel: tsc: Detected 2299.998 MHz processor Sep 16 04:51:51.622177 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 04:51:51.622197 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 04:51:51.622213 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 16 04:51:51.622228 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 16 04:51:51.622244 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 04:51:51.622260 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 16 04:51:51.622275 kernel: Using GB pages for direct mapping Sep 16 04:51:51.622290 kernel: ACPI: Early table checksum verification disabled Sep 16 04:51:51.622307 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 16 04:51:51.622332 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 16 04:51:51.622349 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 16 04:51:51.622365 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 16 04:51:51.622394 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 16 04:51:51.622411 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 16 04:51:51.622425 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 16 04:51:51.622444 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 16 04:51:51.622459 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 16 04:51:51.622473 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 16 04:51:51.622490 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 16 04:51:51.622506 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 16 04:51:51.622521 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 16 04:51:51.622539 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 16 04:51:51.622556 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 16 04:51:51.622572 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 16 04:51:51.622591 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 16 04:51:51.622606 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 16 04:51:51.622620 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 16 04:51:51.622635 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 16 04:51:51.622651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 16 04:51:51.622666 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 16 04:51:51.622682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 16 04:51:51.622697 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Sep 16 04:51:51.622712 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Sep 16 04:51:51.622731 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Sep 16 04:51:51.622747 kernel: Zone ranges: Sep 16 04:51:51.622762 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 04:51:51.622778 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 16 04:51:51.622794 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 16 04:51:51.622809 kernel: Device empty Sep 16 04:51:51.622825 kernel: Movable zone start for each node Sep 16 04:51:51.622840 kernel: Early memory node ranges Sep 16 04:51:51.622854 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 16 04:51:51.622874 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 16 04:51:51.622890 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff] Sep 16 04:51:51.622906 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff] Sep 16 04:51:51.622923 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 16 04:51:51.622939 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 16 04:51:51.622956 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 16 04:51:51.622972 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 04:51:51.622989 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 16 04:51:51.623006 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 16 04:51:51.623022 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Sep 16 04:51:51.623043 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 16 04:51:51.623059 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 16 04:51:51.623576 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 16 04:51:51.623596 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 16 04:51:51.623613 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 16 04:51:51.623631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 04:51:51.623648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 04:51:51.623666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 04:51:51.623683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 04:51:51.623707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 04:51:51.623725 kernel: CPU topo: Max. logical packages: 1 Sep 16 04:51:51.623748 kernel: CPU topo: Max. logical dies: 1 Sep 16 04:51:51.623765 kernel: CPU topo: Max. dies per package: 1 Sep 16 04:51:51.623783 kernel: CPU topo: Max. threads per core: 2 Sep 16 04:51:51.623800 kernel: CPU topo: Num. cores per package: 1 Sep 16 04:51:51.623817 kernel: CPU topo: Num. 
threads per package: 2 Sep 16 04:51:51.623838 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 16 04:51:51.623855 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 16 04:51:51.623877 kernel: Booting paravirtualized kernel on KVM Sep 16 04:51:51.623894 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 04:51:51.623911 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 16 04:51:51.623927 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 16 04:51:51.623944 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 16 04:51:51.623961 kernel: pcpu-alloc: [0] 0 1 Sep 16 04:51:51.623977 kernel: kvm-guest: PV spinlocks enabled Sep 16 04:51:51.623995 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 16 04:51:51.624014 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:51:51.624036 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:51:51.624052 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 16 04:51:51.624070 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:51:51.624088 kernel: Fallback order for Node 0: 0 Sep 16 04:51:51.624105 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Sep 16 04:51:51.624122 kernel: Policy zone: Normal Sep 16 04:51:51.624139 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:51:51.624155 kernel: software IO TLB: area num 2. Sep 16 04:51:51.624200 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:51:51.624218 kernel: Kernel/User page tables isolation: enabled Sep 16 04:51:51.624237 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 04:51:51.624259 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 04:51:51.624277 kernel: Dynamic Preempt: voluntary Sep 16 04:51:51.624295 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:51:51.624315 kernel: rcu: RCU event tracing is enabled. Sep 16 04:51:51.624334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:51:51.624356 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:51:51.624373 kernel: Rude variant of Tasks RCU enabled. Sep 16 04:51:51.624418 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:51:51.624442 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:51:51.625083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:51:51.625109 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:51:51.625129 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:51:51.625148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:51:51.625185 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 16 04:51:51.625204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 16 04:51:51.625222 kernel: Console: colour dummy device 80x25 Sep 16 04:51:51.625241 kernel: printk: legacy console [ttyS0] enabled Sep 16 04:51:51.625260 kernel: ACPI: Core revision 20240827 Sep 16 04:51:51.625277 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 04:51:51.625297 kernel: x2apic enabled Sep 16 04:51:51.625315 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 04:51:51.625334 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 16 04:51:51.625353 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 16 04:51:51.625376 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Sep 16 04:51:51.625416 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 16 04:51:51.625436 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 16 04:51:51.625455 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 04:51:51.625474 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 16 04:51:51.625493 kernel: Spectre V2 : Mitigation: IBRS Sep 16 04:51:51.625512 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 04:51:51.625530 kernel: RETBleed: Mitigation: IBRS Sep 16 04:51:51.625554 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 16 04:51:51.625572 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 16 04:51:51.625591 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 16 04:51:51.625610 kernel: MDS: Mitigation: Clear CPU buffers Sep 16 04:51:51.625628 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 04:51:51.625647 kernel: active return thunk: its_return_thunk Sep 16 04:51:51.625665 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 16 04:51:51.625684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 04:51:51.625703 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 04:51:51.625725 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 04:51:51.625743 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 04:51:51.625762 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 16 04:51:51.625781 kernel: Freeing SMP alternatives memory: 32K Sep 16 04:51:51.625800 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:51:51.625818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:51:51.625836 kernel: landlock: Up and running. Sep 16 04:51:51.625855 kernel: SELinux: Initializing. Sep 16 04:51:51.625874 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 04:51:51.625896 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 04:51:51.625915 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 16 04:51:51.625934 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 16 04:51:51.625952 kernel: signal: max sigframe size: 1776 Sep 16 04:51:51.625971 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:51:51.625990 kernel: rcu: Max phase no-delay instances is 400. 
Sep 16 04:51:51.626009 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:51:51.626028 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 16 04:51:51.626046 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:51:51.626069 kernel: smpboot: x86: Booting SMP configuration: Sep 16 04:51:51.626087 kernel: .... node #0, CPUs: #1 Sep 16 04:51:51.626106 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 16 04:51:51.626126 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 16 04:51:51.626145 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:51:51.626171 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 16 04:51:51.626190 kernel: Memory: 7564024K/7860552K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 290704K reserved, 0K cma-reserved) Sep 16 04:51:51.626209 kernel: devtmpfs: initialized Sep 16 04:51:51.626232 kernel: x86/mm: Memory block size: 128MB Sep 16 04:51:51.626251 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 16 04:51:51.626269 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:51:51.626289 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:51:51.626308 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:51:51.626326 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:51:51.626345 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:51:51.626364 kernel: audit: type=2000 audit(1757998307.377:1): state=initialized audit_enabled=0 res=1 Sep 16 04:51:51.626429 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:51:51.626450 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 04:51:51.626467 kernel: cpuidle: using governor menu Sep 16 04:51:51.626484 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:51:51.626503 kernel: dca service started, version 1.12.1 Sep 16 04:51:51.626521 kernel: PCI: Using configuration type 1 for base access Sep 16 04:51:51.626540 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 16 04:51:51.626557 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:51:51.626577 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:51:51.626596 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:51:51.626617 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:51:51.626634 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:51:51.626651 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:51:51.626668 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:51:51.626685 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 16 04:51:51.626701 kernel: ACPI: Interpreter enabled Sep 16 04:51:51.626721 kernel: ACPI: PM: (supports S0 S3 S5) Sep 16 04:51:51.626742 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 04:51:51.626760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 04:51:51.626781 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 16 04:51:51.626797 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 16 04:51:51.626813 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:51:51.627067 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:51:51.627271 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 16 04:51:51.637547 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 16 04:51:51.637591 kernel: PCI host bridge to bus 0000:00 Sep 16 04:51:51.637797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 04:51:51.637981 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 04:51:51.638149 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 04:51:51.638332 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 16 04:51:51.638529 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:51:51.638762 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:51:51.638968 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 16 04:51:51.639187 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 16 04:51:51.639369 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 16 04:51:51.639601 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Sep 16 04:51:51.639787 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Sep 16 04:51:51.639973 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Sep 16 04:51:51.640178 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 16 04:51:51.640372 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Sep 16 04:51:51.640618 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Sep 16 04:51:51.640814 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 16 04:51:51.641000 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Sep 16 04:51:51.641192 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Sep 16 04:51:51.641216 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 04:51:51.641237 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 04:51:51.641261 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Sep 16 04:51:51.641280 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 16 04:51:51.641299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 16 04:51:51.641318 kernel: iommu: Default domain type: Translated Sep 16 04:51:51.641337 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 04:51:51.641356 kernel: efivars: Registered efivars operations Sep 16 04:51:51.641376 kernel: PCI: Using ACPI for IRQ routing Sep 16 04:51:51.641418 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 04:51:51.641435 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 16 04:51:51.641456 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 16 04:51:51.641472 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff] Sep 16 04:51:51.641488 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 16 04:51:51.641504 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 16 04:51:51.641521 kernel: vgaarb: loaded Sep 16 04:51:51.641537 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 04:51:51.641554 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:51:51.641571 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:51:51.641588 kernel: pnp: PnP ACPI init Sep 16 04:51:51.641608 kernel: pnp: PnP ACPI: found 7 devices Sep 16 04:51:51.641626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 04:51:51.641643 kernel: NET: Registered PF_INET protocol family Sep 16 04:51:51.641660 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 16 04:51:51.641678 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 16 04:51:51.641696 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:51:51.641713 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:51:51.641731 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 16 04:51:51.641749 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 16 04:51:51.641770 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 16 04:51:51.641788 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 16 04:51:51.641805 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:51:51.641823 kernel: NET: Registered PF_XDP protocol family Sep 16 04:51:51.642011 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 04:51:51.642188 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 04:51:51.642352 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 04:51:51.642552 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 16 04:51:51.642756 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 16 04:51:51.642781 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:51:51.642799 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 16 04:51:51.642818 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 16 04:51:51.642835 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 16 04:51:51.642853 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 16 04:51:51.642871 kernel: clocksource: Switched to clocksource tsc Sep 16 04:51:51.642888 
kernel: Initialise system trusted keyrings Sep 16 04:51:51.642911 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 16 04:51:51.642928 kernel: Key type asymmetric registered Sep 16 04:51:51.642946 kernel: Asymmetric key parser 'x509' registered Sep 16 04:51:51.642964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 04:51:51.642982 kernel: io scheduler mq-deadline registered Sep 16 04:51:51.643000 kernel: io scheduler kyber registered Sep 16 04:51:51.643018 kernel: io scheduler bfq registered Sep 16 04:51:51.643037 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 04:51:51.643056 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 16 04:51:51.643270 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 16 04:51:51.643294 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 16 04:51:51.643503 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 16 04:51:51.643528 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 16 04:51:51.643713 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 16 04:51:51.643736 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:51:51.643756 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 04:51:51.643776 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 16 04:51:51.643794 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 16 04:51:51.643818 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 16 04:51:51.644005 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 16 04:51:51.644030 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 04:51:51.644048 kernel: i8042: Warning: Keylock active Sep 16 04:51:51.644066 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 04:51:51.644086 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 04:51:51.644286 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 16 04:51:51.644741 kernel: rtc_cmos 00:00: registered as rtc0 Sep 16 04:51:51.644921 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T04:51:50 UTC (1757998310) Sep 16 04:51:51.645094 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 16 04:51:51.645117 kernel: intel_pstate: CPU model not supported Sep 16 04:51:51.645137 kernel: pstore: Using crash dump compression: deflate Sep 16 04:51:51.645156 kernel: pstore: Registered efi_pstore as persistent store backend Sep 16 04:51:51.645187 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:51:51.645206 kernel: Segment Routing with IPv6 Sep 16 04:51:51.645225 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:51:51.645250 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:51:51.645269 kernel: Key type dns_resolver registered Sep 16 04:51:51.645288 kernel: IPI shorthand broadcast: enabled Sep 16 04:51:51.645307 kernel: sched_clock: Marking stable (3436004569, 147507575)->(3624196786, -40684642) Sep 16 04:51:51.645326 kernel: registered taskstats version 1 Sep 16 04:51:51.645345 kernel: Loading compiled-in X.509 certificates Sep 16 04:51:51.645364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 04:51:51.645398 kernel: Demotion targets for Node 0: null Sep 16 04:51:51.645414 kernel: Key type .fscrypt registered Sep 16 04:51:51.645437 kernel: Key type fscrypt-provisioning registered Sep 16 
04:51:51.645455 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:51:51.645473 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 04:51:51.645492 kernel: ima: No architecture policies found Sep 16 04:51:51.645508 kernel: clk: Disabling unused clocks Sep 16 04:51:51.645527 kernel: Warning: unable to open an initial console. Sep 16 04:51:51.645546 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:51:51.645565 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:51:51.645588 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:51:51.645607 kernel: Run /init as init process Sep 16 04:51:51.645626 kernel: with arguments: Sep 16 04:51:51.645644 kernel: /init Sep 16 04:51:51.645662 kernel: with environment: Sep 16 04:51:51.645681 kernel: HOME=/ Sep 16 04:51:51.645700 kernel: TERM=linux Sep 16 04:51:51.645718 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:51:51.645739 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:51:51.645767 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:51:51.645788 systemd[1]: Detected virtualization google. Sep 16 04:51:51.645808 systemd[1]: Detected architecture x86-64. Sep 16 04:51:51.645827 systemd[1]: Running in initrd. Sep 16 04:51:51.645847 systemd[1]: No hostname configured, using default hostname. Sep 16 04:51:51.645867 systemd[1]: Hostname set to . Sep 16 04:51:51.645887 systemd[1]: Initializing machine ID from random generator. Sep 16 04:51:51.645911 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:51:51.646091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:51:51.646116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:51:51.646137 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:51:51.646166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:51:51.646187 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:51:51.646213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:51:51.646236 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:51:51.646257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:51:51.646279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:51:51.646300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:51:51.646321 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:51:51.646342 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:51:51.646366 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:51:51.646410 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:51:51.646431 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 16 04:51:51.646453 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:51:51.646474 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:51:51.646495 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:51:51.646515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:51:51.646537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:51:51.646562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:51:51.646584 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:51:51.646605 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:51:51.646629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:51:51.646649 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:51:51.646671 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:51:51.646692 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:51:51.646713 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:51:51.646735 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:51:51.646759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:51:51.646823 systemd-journald[207]: Collecting audit messages is disabled. Sep 16 04:51:51.646869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:51:51.646896 systemd-journald[207]: Journal started Sep 16 04:51:51.646938 systemd-journald[207]: Runtime Journal (/run/log/journal/ba09206a6c3b4fa999326587f72bbfd4) is 8M, max 148.9M, 140.9M free. Sep 16 04:51:51.619611 systemd-modules-load[208]: Inserted module 'overlay' Sep 16 04:51:51.664707 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:51:51.667193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:51:51.667898 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:51:51.672567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:51:51.673772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:51:51.690455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:51:51.702848 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 16 04:51:51.703435 kernel: Bridge firewalling registered Sep 16 04:51:51.705925 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:51:51.710835 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:51:51.786312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:51:51.808957 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:51:51.818009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:51:51.841806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 16 04:51:51.859798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:51:51.888711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:51:51.916643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:51:51.922531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:51:51.939633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:51:51.947852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:51:51.961561 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:51:51.982345 systemd-resolved[234]: Positive Trust Anchors: Sep 16 04:51:51.982372 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:51:51.982463 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:51:52.078441 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:51:52.126552 kernel: SCSI subsystem initialized Sep 16 04:51:51.985851 systemd-resolved[234]: Defaulting to hostname 'linux'. Sep 16 04:51:51.987762 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:51:52.147581 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:51:51.997797 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:51:52.162554 kernel: iscsi: registered transport (tcp) Sep 16 04:51:52.188362 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:51:52.188465 kernel: QLogic iSCSI HBA Driver Sep 16 04:51:52.212988 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:51:52.248341 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:51:52.271539 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:51:52.335314 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:51:52.355673 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:51:52.433431 kernel: raid6: avx2x4 gen() 18281 MB/s Sep 16 04:51:52.454432 kernel: raid6: avx2x2 gen() 17994 MB/s Sep 16 04:51:52.480442 kernel: raid6: avx2x1 gen() 14159 MB/s Sep 16 04:51:52.480525 kernel: raid6: using algorithm avx2x4 gen() 18281 MB/s Sep 16 04:51:52.507452 kernel: raid6: .... 
xor() 7784 MB/s, rmw enabled Sep 16 04:51:52.507538 kernel: raid6: using avx2x2 recovery algorithm Sep 16 04:51:52.536423 kernel: xor: automatically using best checksumming function avx Sep 16 04:51:52.724429 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:51:52.732890 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:51:52.743921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:51:52.805942 systemd-udevd[455]: Using default interface naming scheme 'v255'. Sep 16 04:51:52.814900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:51:52.839736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:51:52.879130 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Sep 16 04:51:52.912045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:51:52.914171 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:51:53.028930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:51:53.043496 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:51:53.151405 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 04:51:53.151476 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Sep 16 04:51:53.212345 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 04:51:53.239447 kernel: AES CTR mode by8 optimization enabled Sep 16 04:51:53.284047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:51:53.321231 kernel: scsi host0: Virtio SCSI HBA Sep 16 04:51:53.321569 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 16 04:51:53.284269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:51:53.299366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:51:53.366113 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 16 04:51:53.366480 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 16 04:51:53.366706 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 16 04:51:53.366917 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 16 04:51:53.367127 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 16 04:51:53.333526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:51:53.423293 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:51:53.423348 kernel: GPT:17805311 != 25165823 Sep 16 04:51:53.423404 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:51:53.423434 kernel: GPT:17805311 != 25165823 Sep 16 04:51:53.423464 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 04:51:53.423493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:51:53.376541 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:51:53.439589 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 16 04:51:53.472809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:51:53.517317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:51:53.548259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
Sep 16 04:51:53.567634 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 16 04:51:53.568706 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 16 04:51:53.604898 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 16 04:51:53.624792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 16 04:51:53.630812 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:51:53.649802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:51:53.669793 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:51:53.689878 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:51:53.705786 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:51:53.745828 disk-uuid[605]: Primary Header is updated. Sep 16 04:51:53.745828 disk-uuid[605]: Secondary Entries is updated. Sep 16 04:51:53.745828 disk-uuid[605]: Secondary Header is updated. Sep 16 04:51:53.769522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:51:53.761368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:51:53.801426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:51:54.821590 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:51:54.822010 disk-uuid[606]: The operation has completed successfully. Sep 16 04:51:54.901663 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:51:54.901817 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:51:54.949515 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:51:54.977748 sh[627]: Success Sep 16 04:51:55.014514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:51:55.014604 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:51:55.015448 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:51:55.041460 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 16 04:51:55.123906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:51:55.128495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:51:55.161789 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:51:55.208232 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (640) Sep 16 04:51:55.208279 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 04:51:55.208320 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:51:55.234591 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 04:51:55.234697 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:51:55.234723 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:51:55.245025 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:51:55.245852 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 16 04:51:55.258846 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:51:55.259907 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:51:55.276774 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:51:55.338422 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (663) Sep 16 04:51:55.348422 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:51:55.348496 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:51:55.373406 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:51:55.373481 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:51:55.373505 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:51:55.393472 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:51:55.395101 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:51:55.407251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:51:55.503959 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:51:55.529557 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:51:55.652028 systemd-networkd[810]: lo: Link UP Sep 16 04:51:55.652042 systemd-networkd[810]: lo: Gained carrier Sep 16 04:51:55.659037 systemd-networkd[810]: Enumeration completed Sep 16 04:51:55.659200 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:51:55.659613 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:51:55.686255 ignition[731]: Ignition 2.22.0 Sep 16 04:51:55.659620 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:51:55.686268 ignition[731]: Stage: fetch-offline Sep 16 04:51:55.661582 systemd-networkd[810]: eth0: Link UP Sep 16 04:51:55.686323 ignition[731]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:55.661941 systemd-networkd[810]: eth0: Gained carrier Sep 16 04:51:55.686339 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:55.661960 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:51:55.686637 ignition[731]: parsed url from cmdline: "" Sep 16 04:51:55.670023 systemd[1]: Reached target network.target - Network. Sep 16 04:51:55.686644 ignition[731]: no config URL provided Sep 16 04:51:55.684480 systemd-networkd[810]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7' Sep 16 04:51:55.686655 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:51:55.684500 systemd-networkd[810]: eth0: DHCPv4 address 10.128.0.59/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 16 04:51:55.686670 ignition[731]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:51:55.691897 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 16 04:51:55.686680 ignition[731]: failed to fetch config: resource requires networking Sep 16 04:51:55.712414 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 16 04:51:55.686904 ignition[731]: Ignition finished successfully Sep 16 04:51:55.785581 unknown[820]: fetched base config from "system" Sep 16 04:51:55.774999 ignition[820]: Ignition 2.22.0 Sep 16 04:51:55.785594 unknown[820]: fetched base config from "system" Sep 16 04:51:55.775009 ignition[820]: Stage: fetch Sep 16 04:51:55.785605 unknown[820]: fetched user config from "gcp" Sep 16 04:51:55.775182 ignition[820]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:55.790532 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 04:51:55.775193 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:55.814935 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:51:55.775310 ignition[820]: parsed url from cmdline: "" Sep 16 04:51:55.873568 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:51:55.775315 ignition[820]: no config URL provided Sep 16 04:51:55.893199 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 04:51:55.775321 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:51:55.932847 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:51:55.775331 ignition[820]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:51:55.947268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:51:55.775375 ignition[820]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 16 04:51:55.961559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:51:55.779056 ignition[820]: GET result: OK Sep 16 04:51:55.977563 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:51:55.779149 ignition[820]: parsing config with SHA512: fb6a88c94949c65dbd963368407c1d1796a21fd4b0e1a249172d770c33a78953f13defda19d1ad5f7190afe784a5f347c875a3df19bfb9f5ac0358557d0c60fe Sep 16 04:51:55.990588 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:51:55.787017 ignition[820]: fetch: fetch complete Sep 16 04:51:56.003553 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:51:55.787035 ignition[820]: fetch: fetch passed Sep 16 04:51:56.019095 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 16 04:51:55.787120 ignition[820]: Ignition finished successfully Sep 16 04:51:55.870010 ignition[827]: Ignition 2.22.0 Sep 16 04:51:55.870019 ignition[827]: Stage: kargs Sep 16 04:51:55.870183 ignition[827]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:55.870197 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:55.871772 ignition[827]: kargs: kargs passed Sep 16 04:51:55.871833 ignition[827]: Ignition finished successfully Sep 16 04:51:55.929773 ignition[832]: Ignition 2.22.0 Sep 16 04:51:55.929781 ignition[832]: Stage: disks Sep 16 04:51:55.929948 ignition[832]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:55.929965 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:55.931123 ignition[832]: disks: disks passed Sep 16 04:51:55.931180 ignition[832]: Ignition finished successfully Sep 16 04:51:56.080396 systemd-fsck[842]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 16 04:51:56.243154 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:51:56.264858 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:51:56.454440 kernel: EXT4-fs (sda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 04:51:56.455217 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:51:56.456096 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:51:56.471653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:51:56.503147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:51:56.519449 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (850) Sep 16 04:51:56.538943 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:51:56.539034 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:51:56.543205 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 04:51:56.584594 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:51:56.584643 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:51:56.584669 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:51:56.543300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:51:56.543348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:51:56.552716 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:51:56.593416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:51:56.610952 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:51:56.737276 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:51:56.747785 initrd-setup-root[881]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:51:56.756581 initrd-setup-root[888]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:51:56.766559 initrd-setup-root[895]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:51:56.913930 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:51:56.933268 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Sep 16 04:51:56.941544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:51:56.972744 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:51:56.989635 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:51:57.011945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:51:57.028146 ignition[962]: INFO : Ignition 2.22.0 Sep 16 04:51:57.028146 ignition[962]: INFO : Stage: mount Sep 16 04:51:57.050536 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:57.050536 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:57.050536 ignition[962]: INFO : mount: mount passed Sep 16 04:51:57.050536 ignition[962]: INFO : Ignition finished successfully Sep 16 04:51:57.031307 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:51:57.045761 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:51:57.457371 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:51:57.498469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (975) Sep 16 04:51:57.516910 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:51:57.516996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:51:57.533419 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:51:57.533507 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:51:57.533532 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:51:57.542096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:51:57.586601 ignition[992]: INFO : Ignition 2.22.0 Sep 16 04:51:57.586601 ignition[992]: INFO : Stage: files Sep 16 04:51:57.599532 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:51:57.599532 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:51:57.599532 ignition[992]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:51:57.599532 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:51:57.599532 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:51:57.599532 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:51:57.599532 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:51:57.599532 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:51:57.594683 unknown[992]: wrote ssh authorized keys file for user: core Sep 16 04:51:57.693545 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 16 04:51:57.693545 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 16 04:51:57.694573 systemd-networkd[810]: eth0: Gained IPv6LL Sep 16 04:51:57.731514 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:51:58.256422 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 16 04:51:58.256422 ignition[992]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:51:58.256422 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 16 04:51:58.507933 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 04:51:58.672186 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 16 04:51:58.686559 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 16 04:51:59.029478 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 04:51:59.601848 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 16 04:51:59.601848 ignition[992]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: op(c): op(d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:51:59.638579 ignition[992]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:51:59.638579 ignition[992]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:51:59.638579 ignition[992]: INFO : files: files passed Sep 16 04:51:59.638579 ignition[992]: INFO : Ignition finished successfully Sep 16 04:51:59.611084 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:51:59.621408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:51:59.639879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:51:59.698568 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:51:59.841530 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:51:59.841530 initrd-setup-root-after-ignition[1021]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:51:59.698736 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:51:59.875631 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:51:59.709930 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:51:59.731861 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:51:59.754596 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:51:59.875464 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:51:59.875661 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 04:51:59.876112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:51:59.906711 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:51:59.925792 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:51:59.926985 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:52:00.019686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:52:00.040695 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:52:00.081218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:52:00.100709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:52:00.101139 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:52:00.128909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:52:00.129130 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:52:00.162878 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 16 04:52:00.171782 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:52:00.188861 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:52:00.204868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:52:00.223803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:52:00.242963 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:52:00.261777 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:52:00.279850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:52:00.298853 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:52:00.317792 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:52:00.335839 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:52:00.351706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:52:00.351973 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:52:00.373909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:52:00.382953 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:52:00.418675 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:52:00.419033 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:52:00.446690 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:52:00.447026 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:52:00.472820 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:52:00.473199 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:52:00.491893 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:52:00.492092 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:52:00.512092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:52:00.534588 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:52:00.534975 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:52:00.553902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:52:00.580416 ignition[1046]: INFO : Ignition 2.22.0 Sep 16 04:52:00.580416 ignition[1046]: INFO : Stage: umount Sep 16 04:52:00.609567 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:52:00.609567 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 04:52:00.609567 ignition[1046]: INFO : umount: umount passed Sep 16 04:52:00.609567 ignition[1046]: INFO : Ignition finished successfully Sep 16 04:52:00.587575 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:52:00.588186 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:52:00.603822 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:52:00.604001 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:52:00.653464 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:52:00.654822 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:52:00.654940 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 16 04:52:00.669168 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:52:00.669330 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:52:00.689591 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:52:00.689722 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:52:00.708346 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:52:00.708460 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:52:00.723651 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:52:00.723741 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:52:00.739679 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 04:52:00.739781 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 04:52:00.764761 systemd[1]: Stopped target network.target - Network. Sep 16 04:52:00.771787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:52:00.771876 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:52:00.785878 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:52:00.810666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:52:00.816479 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:52:00.820749 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:52:00.837795 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:52:00.853855 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:52:00.853920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:52:00.869857 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:52:00.869916 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:52:00.885836 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:52:00.885925 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:52:00.901945 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:52:00.902022 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:52:00.933850 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:52:00.933963 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:52:00.943037 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:52:00.973846 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:52:00.984215 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:52:00.984422 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:52:01.007228 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:52:01.007546 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:52:01.007687 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:52:01.037406 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:52:01.038936 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:52:01.042774 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:52:01.042833 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Sep 16 04:52:01.075939 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:52:01.092539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:52:01.092676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:52:01.109737 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:52:01.109845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:52:01.145906 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:52:01.145984 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:52:01.163768 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:52:01.163865 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:52:01.182970 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:52:01.193826 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:52:01.193938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:52:01.197836 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:52:01.198090 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:52:01.219302 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:52:01.219414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:52:01.244666 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:52:01.244741 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:52:01.263634 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:52:01.606617 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Sep 16 04:52:01.263744 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:52:01.290605 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:52:01.290741 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:52:01.318583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:52:01.318751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:52:01.346678 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:52:01.354799 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:52:01.354908 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:52:01.392979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:52:01.393055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:52:01.421939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:52:01.422015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:52:01.442624 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 16 04:52:01.442703 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 16 04:52:01.442748 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 16 04:52:01.443294 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:52:01.443432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:52:01.450126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:52:01.450255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:52:01.487455 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:52:01.497948 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:52:01.554661 systemd[1]: Switching root. Sep 16 04:52:01.817580 systemd-journald[207]: Journal stopped Sep 16 04:52:04.516337 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:52:04.516426 kernel: SELinux: policy capability open_perms=1 Sep 16 04:52:04.516453 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:52:04.516475 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:52:04.516495 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:52:04.516517 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:52:04.516547 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:52:04.516570 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:52:04.516592 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:52:04.516615 kernel: audit: type=1403 audit(1757998322.341:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:52:04.516641 systemd[1]: Successfully loaded SELinux policy in 122.500ms. Sep 16 04:52:04.516666 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.722ms. Sep 16 04:52:04.516692 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:52:04.516722 systemd[1]: Detected virtualization google. Sep 16 04:52:04.516747 systemd[1]: Detected architecture x86-64. Sep 16 04:52:04.516771 systemd[1]: Detected first boot. Sep 16 04:52:04.516796 systemd[1]: Initializing machine ID from random generator. Sep 16 04:52:04.516824 zram_generator::config[1089]: No configuration found. Sep 16 04:52:04.516853 kernel: Guest personality initialized and is inactive Sep 16 04:52:04.516877 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 16 04:52:04.516900 kernel: Initialized host personality Sep 16 04:52:04.516923 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:52:04.516947 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:52:04.516973 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:52:04.516997 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:52:04.517026 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:52:04.517050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:52:04.517075 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:52:04.517099 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:52:04.517125 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Sep 16 04:52:04.517157 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:52:04.517182 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:52:04.517212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:52:04.517238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:52:04.517264 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:52:04.517289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:52:04.517315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:52:04.517340 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:52:04.517368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:52:04.517409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:52:04.517443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:52:04.517473 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 16 04:52:04.517499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:52:04.517524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:52:04.517550 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:52:04.517574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:52:04.517600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:52:04.517625 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:52:04.517655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:52:04.517680 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:52:04.517706 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:52:04.517731 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:52:04.517756 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:52:04.517781 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:52:04.517806 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:52:04.517838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:52:04.517864 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:52:04.517889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:52:04.517915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:52:04.517943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:52:04.517968 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:52:04.517999 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:52:04.518025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:52:04.518052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 16 04:52:04.518077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:52:04.518103 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:52:04.518135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:52:04.518161 systemd[1]: Reached target machines.target - Containers. Sep 16 04:52:04.518187 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:52:04.518218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:52:04.518244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:52:04.518270 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:52:04.518296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:52:04.518322 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:52:04.518347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:52:04.518373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:52:04.518412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:52:04.518439 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:52:04.518469 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:52:04.518495 kernel: fuse: init (API version 7.41) Sep 16 04:52:04.518520 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:52:04.518546 kernel: ACPI: bus type drm_connector registered Sep 16 04:52:04.518569 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:52:04.518595 kernel: loop: module loaded Sep 16 04:52:04.518620 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:52:04.518647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:52:04.518678 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:52:04.518704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:52:04.518730 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:52:04.518805 systemd-journald[1177]: Collecting audit messages is disabled. Sep 16 04:52:04.518857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:52:04.518885 systemd-journald[1177]: Journal started Sep 16 04:52:04.518930 systemd-journald[1177]: Runtime Journal (/run/log/journal/b0af3b9b963e4b1e99708ab8a18d7afb) is 8M, max 148.9M, 140.9M free. Sep 16 04:52:03.309479 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:52:03.335363 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 16 04:52:03.336077 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:52:04.554648 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 16 04:52:04.575434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:52:04.592455 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:52:04.599442 systemd[1]: Stopped verity-setup.service. Sep 16 04:52:04.623428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:52:04.635429 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:52:04.646120 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:52:04.655806 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:52:04.666792 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:52:04.675807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:52:04.684743 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:52:04.693815 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:52:04.703997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:52:04.715092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:52:04.725985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:52:04.726286 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:52:04.737004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:52:04.737317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:52:04.747997 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:52:04.748419 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:52:04.757912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:52:04.758206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:52:04.768942 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:52:04.769259 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:52:04.778916 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:52:04.779218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:52:04.789006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:52:04.800041 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:52:04.811075 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:52:04.822052 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:52:04.833095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:52:04.856611 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:52:04.867188 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:52:04.885533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:52:04.894619 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:52:04.894870 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 16 04:52:04.904926 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:52:04.917144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:52:04.925838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:52:04.934809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:52:04.946592 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:52:04.956636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:52:04.959034 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:52:04.968631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:52:04.980338 systemd-journald[1177]: Time spent on flushing to /var/log/journal/b0af3b9b963e4b1e99708ab8a18d7afb is 84.788ms for 961 entries. Sep 16 04:52:04.980338 systemd-journald[1177]: System Journal (/var/log/journal/b0af3b9b963e4b1e99708ab8a18d7afb) is 8M, max 584.8M, 576.8M free. Sep 16 04:52:05.125324 systemd-journald[1177]: Received client request to flush runtime journal. Sep 16 04:52:05.125450 kernel: loop0: detected capacity change from 0 to 110984 Sep 16 04:52:04.975556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:52:04.996659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:52:05.012192 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:52:05.027137 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:52:05.037758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:52:05.049668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:52:05.066043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:52:05.079982 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:52:05.121119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:52:05.131488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:52:05.156484 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:52:05.157695 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:52:05.182418 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:52:05.209622 kernel: loop1: detected capacity change from 0 to 50736 Sep 16 04:52:05.208901 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:52:05.222512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:52:05.291440 kernel: loop2: detected capacity change from 0 to 221472 Sep 16 04:52:05.292987 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Sep 16 04:52:05.293021 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Sep 16 04:52:05.302043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 16 04:52:05.415432 kernel: loop3: detected capacity change from 0 to 128016 Sep 16 04:52:05.492986 kernel: loop4: detected capacity change from 0 to 110984 Sep 16 04:52:05.549475 kernel: loop5: detected capacity change from 0 to 50736 Sep 16 04:52:05.593419 kernel: loop6: detected capacity change from 0 to 221472 Sep 16 04:52:05.645452 kernel: loop7: detected capacity change from 0 to 128016 Sep 16 04:52:05.692510 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 16 04:52:05.693532 (sd-merge)[1235]: Merged extensions into '/usr'. Sep 16 04:52:05.703079 systemd[1]: Reload requested from client PID 1212 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:52:05.703497 systemd[1]: Reloading... Sep 16 04:52:05.882842 zram_generator::config[1258]: No configuration found. Sep 16 04:52:06.126424 ldconfig[1207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:52:06.364831 systemd[1]: Reloading finished in 660 ms. Sep 16 04:52:06.385066 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:52:06.394445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:52:06.414164 systemd[1]: Starting ensure-sysext.service... Sep 16 04:52:06.430452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:52:06.462811 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:52:06.481579 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:52:06.481637 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:52:06.482134 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:52:06.482692 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:52:06.483696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:52:06.484675 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:52:06.485254 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Sep 16 04:52:06.485405 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Sep 16 04:52:06.492923 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:52:06.492943 systemd-tmpfiles[1303]: Skipping /boot Sep 16 04:52:06.496349 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:52:06.496593 systemd[1]: Reloading... Sep 16 04:52:06.508278 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:52:06.508306 systemd-tmpfiles[1303]: Skipping /boot Sep 16 04:52:06.561561 systemd-udevd[1306]: Using default interface naming scheme 'v255'. Sep 16 04:52:06.667442 zram_generator::config[1339]: No configuration found. 
Sep 16 04:52:07.097417 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 04:52:07.153397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 16 04:52:07.214274 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 16 04:52:07.212555 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 04:52:07.213298 systemd[1]: Reloading finished in 715 ms. Sep 16 04:52:07.223977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:52:07.238413 kernel: ACPI: button: Power Button [PWRF] Sep 16 04:52:07.272716 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:52:07.279462 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 16 04:52:07.356017 systemd[1]: Finished ensure-sysext.service. Sep 16 04:52:07.362406 kernel: ACPI: button: Sleep Button [SLPF] Sep 16 04:52:07.404960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 16 04:52:07.416274 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 16 04:52:07.425637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:52:07.429358 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:52:07.439777 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:52:07.449845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:52:07.452062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:52:07.463414 kernel: EDAC MC: Ver: 3.0.0 Sep 16 04:52:07.468006 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:52:07.479275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:52:07.492787 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:52:07.504378 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 16 04:52:07.511868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:52:07.518120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:52:07.528537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:52:07.536741 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:52:07.561215 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:52:07.575991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:52:07.584562 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:52:07.598197 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:52:07.608530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:52:07.610428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 16 04:52:07.612608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:52:07.624255 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:52:07.624893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:52:07.634045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:52:07.634474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:52:07.645027 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:52:07.645447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:52:07.679648 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 16 04:52:07.701370 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:52:07.713192 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:52:07.748193 augenrules[1466]: No rules Sep 16 04:52:07.751815 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 16 04:52:07.760571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:52:07.760701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:52:07.767045 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:52:07.782517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:52:07.792327 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:52:07.792705 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:52:07.802171 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:52:07.817523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:52:07.837652 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:52:07.837815 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:52:07.839550 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 16 04:52:07.862667 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:52:07.880613 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:52:07.979738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:52:07.998406 systemd-resolved[1443]: Positive Trust Anchors: Sep 16 04:52:07.998430 systemd-resolved[1443]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:52:07.998500 systemd-resolved[1443]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:52:08.005514 systemd-resolved[1443]: Defaulting to hostname 'linux'. Sep 16 04:52:08.008258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:52:08.013112 systemd-networkd[1440]: lo: Link UP Sep 16 04:52:08.013467 systemd-networkd[1440]: lo: Gained carrier Sep 16 04:52:08.015597 systemd-networkd[1440]: Enumeration completed Sep 16 04:52:08.016250 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:52:08.016258 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:52:08.016962 systemd-networkd[1440]: eth0: Link UP Sep 16 04:52:08.017202 systemd-networkd[1440]: eth0: Gained carrier Sep 16 04:52:08.017240 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:52:08.017711 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:52:08.026775 systemd[1]: Reached target network.target - Network. Sep 16 04:52:08.034577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:52:08.036470 systemd-networkd[1440]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7' Sep 16 04:52:08.036499 systemd-networkd[1440]: eth0: DHCPv4 address 10.128.0.59/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 16 04:52:08.044631 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:52:08.053730 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:52:08.063611 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:52:08.073577 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 16 04:52:08.083776 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:52:08.092833 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:52:08.103624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:52:08.113610 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:52:08.113680 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:52:08.121631 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:52:08.131925 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:52:08.142373 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 16 04:52:08.151856 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:52:08.162849 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:52:08.173646 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:52:08.197478 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:52:08.207094 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:52:08.219444 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:52:08.232550 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:52:08.244650 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:52:08.255345 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:52:08.265567 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:52:08.273625 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:52:08.273670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:52:08.280728 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:52:08.292848 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:52:08.307868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:52:08.323771 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:52:08.347647 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:52:08.358322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:52:08.367535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:52:08.369641 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 04:52:08.377612 jq[1511]: false Sep 16 04:52:08.381667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:52:08.394710 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:52:08.395758 coreos-metadata[1508]: Sep 16 04:52:08.395 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 16 04:52:08.397322 coreos-metadata[1508]: Sep 16 04:52:08.397 INFO Fetch successful Sep 16 04:52:08.397512 coreos-metadata[1508]: Sep 16 04:52:08.397 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 16 04:52:08.398223 coreos-metadata[1508]: Sep 16 04:52:08.398 INFO Fetch successful Sep 16 04:52:08.398301 coreos-metadata[1508]: Sep 16 04:52:08.398 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 16 04:52:08.398832 coreos-metadata[1508]: Sep 16 04:52:08.398 INFO Fetch successful Sep 16 04:52:08.398832 coreos-metadata[1508]: Sep 16 04:52:08.398 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 16 04:52:08.398982 coreos-metadata[1508]: Sep 16 04:52:08.398 INFO Fetch successful Sep 16 04:52:08.407253 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 16 04:52:08.419744 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache Sep 16 04:52:08.420233 oslogin_cache_refresh[1515]: Refreshing passwd entry cache Sep 16 04:52:08.422757 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:52:08.424753 extend-filesystems[1514]: Found /dev/sda6 Sep 16 04:52:08.427719 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:52:08.429170 oslogin_cache_refresh[1515]: Failure getting users, quitting Sep 16 04:52:08.436243 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting Sep 16 04:52:08.436243 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:52:08.436243 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache Sep 16 04:52:08.429200 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:52:08.436680 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting Sep 16 04:52:08.436680 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:52:08.429291 oslogin_cache_refresh[1515]: Refreshing group entry cache Sep 16 04:52:08.436263 oslogin_cache_refresh[1515]: Failure getting groups, quitting Sep 16 04:52:08.436282 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:52:08.445998 extend-filesystems[1514]: Found /dev/sda9 Sep 16 04:52:08.474696 extend-filesystems[1514]: Checking size of /dev/sda9 Sep 16 04:52:08.455737 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:52:08.466955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 16 04:52:08.467821 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:52:08.470697 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:52:08.495561 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:52:08.509774 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:52:08.511658 extend-filesystems[1514]: Resized partition /dev/sda9 Sep 16 04:52:08.526852 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:52:08.540085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:52:08.540473 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:52:08.541523 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 04:52:08.541856 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Sep 16 04:52:08.548445 ntpd[1517]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: ---------------------------------------------------- Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: corporation. Support and training for ntp-4 are Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: available at https://www.nwtime.org/support Sep 16 04:52:08.549796 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: ---------------------------------------------------- Sep 16 04:52:08.548639 ntpd[1517]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:52:08.548657 ntpd[1517]: ---------------------------------------------------- Sep 16 04:52:08.548671 ntpd[1517]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:52:08.548684 ntpd[1517]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:52:08.548698 ntpd[1517]: corporation. Support and training for ntp-4 are Sep 16 04:52:08.548710 ntpd[1517]: available at https://www.nwtime.org/support Sep 16 04:52:08.548724 ntpd[1517]: ---------------------------------------------------- Sep 16 04:52:08.551056 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:52:08.554671 extend-filesystems[1547]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:52:08.551526 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:52:08.568676 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: proto: precision = 0.104 usec (-23) Sep 16 04:52:08.559204 ntpd[1517]: proto: precision = 0.104 usec (-23) Sep 16 04:52:08.564144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:52:08.564957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 16 04:52:08.571015 ntpd[1517]: basedate set to 2025-09-04 Sep 16 04:52:08.571052 ntpd[1517]: gps base set to 2025-09-07 (week 2383) Sep 16 04:52:08.571227 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: basedate set to 2025-09-04 Sep 16 04:52:08.571227 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: gps base set to 2025-09-07 (week 2383) Sep 16 04:52:08.571327 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:52:08.571327 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:52:08.571231 ntpd[1517]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:52:08.571276 ntpd[1517]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:52:08.571605 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:52:08.571660 ntpd[1517]: Listen normally on 3 eth0 10.128.0.59:123 Sep 16 04:52:08.571745 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:52:08.571745 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Listen normally on 3 eth0 10.128.0.59:123 Sep 16 04:52:08.571745 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: Listen normally on 4 lo [::1]:123 Sep 16 04:52:08.571703 ntpd[1517]: Listen normally on 4 lo [::1]:123 Sep 16 04:52:08.571931 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: bind(21) AF_INET6 [fe80::4001:aff:fe80:3b%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:52:08.571931 ntpd[1517]: 16 Sep 04:52:08 ntpd[1517]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:3b%2]:123 Sep 16 04:52:08.571747 ntpd[1517]: bind(21) AF_INET6 [fe80::4001:aff:fe80:3b%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:52:08.571776 ntpd[1517]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:3b%2]:123 Sep 16 04:52:08.615277 kernel: ntpd[1517]: segfault at 24 ip 0000558c14425aeb sp 00007ffd4ab983e0 error 4 in ntpd[68aeb,558c143c3000+80000] likely on CPU 0 (core 0, socket 0) Sep 16 04:52:08.615437 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 16 04:52:08.625931 update_engine[1534]: I20250916 04:52:08.616217 1534 main.cc:92] Flatcar Update Engine starting Sep 16 04:52:08.637463 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 16 04:52:08.647948 jq[1539]: true Sep 16 04:52:08.656107 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 16 04:52:08.685820 extend-filesystems[1547]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 16 04:52:08.685820 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 16 04:52:08.685820 extend-filesystems[1547]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 16 04:52:08.724559 extend-filesystems[1514]: Resized filesystem in /dev/sda9 Sep 16 04:52:08.688342 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:52:08.732829 jq[1550]: true Sep 16 04:52:08.690630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:52:08.729632 systemd-coredump[1569]: Process 1517 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 04:52:08.747036 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:52:08.772004 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
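The resize above is an online grow: extend-filesystems runs resize2fs against the mounted root, taking /dev/sda9 from 1617920 to 2538491 4k blocks (roughly 6.2 GiB to 9.7 GiB). A rough hand-run equivalent, assuming the same device and an already enlarged partition, would be:

  # online-resize the mounted root filesystem to fill its partition
  resize2fs /dev/sda9
  df -h /    # confirm the new size (2538491 x 4 KiB, about 9.7 GiB)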
Sep 16 04:52:08.786050 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 16 04:52:08.803735 systemd[1]: Started systemd-coredump@0-1569-0.service - Process Core Dump (PID 1569/UID 0). Sep 16 04:52:08.816513 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:52:08.849242 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:52:08.857738 tar[1548]: linux-amd64/helm Sep 16 04:52:09.011138 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:52:09.021835 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:52:09.044884 systemd[1]: Starting sshkeys.service... Sep 16 04:52:09.105884 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:52:09.118347 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 04:52:09.120061 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button) Sep 16 04:52:09.120119 systemd-logind[1532]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 16 04:52:09.120152 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 04:52:09.120463 systemd-logind[1532]: New seat seat0. Sep 16 04:52:09.131178 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:52:09.152565 systemd-networkd[1440]: eth0: Gained IPv6LL Sep 16 04:52:09.164119 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:52:09.175656 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:52:09.181950 dbus-daemon[1509]: [system] SELinux support is enabled Sep 16 04:52:09.188173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:09.205537 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:52:09.212783 dbus-daemon[1509]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1440 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 16 04:52:09.213346 update_engine[1534]: I20250916 04:52:09.213006 1534 update_check_scheduler.cc:74] Next update check in 6m14s Sep 16 04:52:09.219543 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 16 04:52:09.227109 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:52:09.243841 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:52:09.244072 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:52:09.254291 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 04:52:09.254919 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:52:09.255140 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:52:09.266637 systemd[1]: Started update-engine.service - Update Engine. 
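update-engine starts here and, as logged just below, schedules its first update check about six minutes out. A small sketch for querying it on a running Flatcar host; the update_engine_client tool and its -status flag are assumed to be present as on CoreOS/Flatcar images:

  # report the current update-engine state (IDLE, CHECKING_FOR_UPDATE, ...)
  update_engine_client -status
  systemctl status update-engine --no-pager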
Sep 16 04:52:09.277853 init.sh[1602]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 16 04:52:09.285052 init.sh[1602]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 16 04:52:09.285052 init.sh[1602]: + /usr/bin/google_instance_setup Sep 16 04:52:09.287969 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 16 04:52:09.332342 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:52:09.377578 coreos-metadata[1593]: Sep 16 04:52:09.375 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 16 04:52:09.386107 coreos-metadata[1593]: Sep 16 04:52:09.380 INFO Fetch failed with 404: resource not found Sep 16 04:52:09.386515 coreos-metadata[1593]: Sep 16 04:52:09.386 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 16 04:52:09.387322 coreos-metadata[1593]: Sep 16 04:52:09.387 INFO Fetch successful Sep 16 04:52:09.387322 coreos-metadata[1593]: Sep 16 04:52:09.387 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 16 04:52:09.389662 coreos-metadata[1593]: Sep 16 04:52:09.389 INFO Fetch failed with 404: resource not found Sep 16 04:52:09.389662 coreos-metadata[1593]: Sep 16 04:52:09.389 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 16 04:52:09.391718 coreos-metadata[1593]: Sep 16 04:52:09.389 INFO Fetch failed with 404: resource not found Sep 16 04:52:09.391718 coreos-metadata[1593]: Sep 16 04:52:09.389 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 16 04:52:09.391718 coreos-metadata[1593]: Sep 16 04:52:09.391 INFO Fetch successful Sep 16 04:52:09.399495 unknown[1593]: wrote ssh authorized keys file for user: core Sep 16 04:52:09.463815 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:52:09.543028 update-ssh-keys[1613]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:52:09.540954 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 04:52:09.564983 systemd[1]: Finished sshkeys.service. Sep 16 04:52:09.644667 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:52:09.767401 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 16 04:52:09.767004 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:52:09.767405 systemd-coredump[1573]: Process 1517 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1517: #0 0x0000558c14425aeb n/a (ntpd + 0x68aeb) #1 0x0000558c143cecdf n/a (ntpd + 0x11cdf) #2 0x0000558c143cf575 n/a (ntpd + 0x12575) #3 0x0000558c143cad8a n/a (ntpd + 0xdd8a) #4 0x0000558c143cc5d3 n/a (ntpd + 0xf5d3) #5 0x0000558c143d4fd1 n/a (ntpd + 0x17fd1) #6 0x0000558c143c5c2d n/a (ntpd + 0x8c2d) #7 0x00007f09161d716c n/a (libc.so.6 + 0x2716c) #8 0x00007f09161d7229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000558c143c5c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Sep 16 04:52:09.776564 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
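The coreos-metadata fetches above walk the usual GCE key locations in order: instance attributes/sshKeys (404), instance attributes/ssh-keys (hit), block-project-ssh-keys, then the project-level variants. The same lookups can be reproduced by hand against the metadata server; the Metadata-Flavor header is required:

  # instance-level keys (the variant that succeeded above)
  curl -s -H "Metadata-Flavor: Google" \
    "http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys"
  # project-level keys (also returned a hit above)
  curl -s -H "Metadata-Flavor: Google" \
    "http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys"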
Sep 16 04:52:09.768290 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1604 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 16 04:52:09.786247 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Sep 16 04:52:09.787691 systemd[1]: ntpd.service: Failed with result 'core-dump'. Sep 16 04:52:09.799276 systemd[1]: systemd-coredump@0-1569-0.service: Deactivated successfully. Sep 16 04:52:09.855274 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:52:09.866931 systemd[1]: Starting polkit.service - Authorization Manager... Sep 16 04:52:09.878527 systemd[1]: Started sshd@0-10.128.0.59:22-139.178.68.195:45574.service - OpenSSH per-connection server daemon (139.178.68.195:45574). Sep 16 04:52:09.891439 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Sep 16 04:52:09.901521 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:52:09.951687 ntpd[1640]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: ---------------------------------------------------- Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: corporation. 
Support and training for ntp-4 are Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: available at https://www.nwtime.org/support Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: ---------------------------------------------------- Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: proto: precision = 0.084 usec (-23) Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: basedate set to 2025-09-04 Sep 16 04:52:09.953110 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: gps base set to 2025-09-07 (week 2383) Sep 16 04:52:09.951768 ntpd[1640]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:52:09.954163 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:52:09.954163 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:52:09.954163 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:52:09.951784 ntpd[1640]: ---------------------------------------------------- Sep 16 04:52:09.954372 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen normally on 3 eth0 10.128.0.59:123 Sep 16 04:52:09.954372 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen normally on 4 lo [::1]:123 Sep 16 04:52:09.954372 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3b%2]:123 Sep 16 04:52:09.954372 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: Listening on routing socket on fd #22 for interface updates Sep 16 04:52:09.951797 ntpd[1640]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:52:09.964480 containerd[1551]: time="2025-09-16T04:52:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:52:09.964805 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:52:09.964805 ntpd[1640]: 16 Sep 04:52:09 ntpd[1640]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:52:09.951811 ntpd[1640]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:52:09.951824 ntpd[1640]: corporation. Support and training for ntp-4 are Sep 16 04:52:09.951837 ntpd[1640]: available at https://www.nwtime.org/support Sep 16 04:52:09.951851 ntpd[1640]: ---------------------------------------------------- Sep 16 04:52:09.952773 ntpd[1640]: proto: precision = 0.084 usec (-23) Sep 16 04:52:09.953085 ntpd[1640]: basedate set to 2025-09-04 Sep 16 04:52:09.953105 ntpd[1640]: gps base set to 2025-09-07 (week 2383) Sep 16 04:52:09.953220 ntpd[1640]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:52:09.953262 ntpd[1640]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:52:09.954135 ntpd[1640]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:52:09.954181 ntpd[1640]: Listen normally on 3 eth0 10.128.0.59:123 Sep 16 04:52:09.954225 ntpd[1640]: Listen normally on 4 lo [::1]:123 Sep 16 04:52:09.954264 ntpd[1640]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3b%2]:123 Sep 16 04:52:09.954302 ntpd[1640]: Listening on routing socket on fd #22 for interface updates Sep 16 04:52:09.956606 ntpd[1640]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:52:09.956638 ntpd[1640]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:52:09.971343 systemd[1]: issuegen.service: Deactivated successfully. 
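The first ntpd (PID 1517) segfaulted and was captured by systemd-coredump; systemd then scheduled a restart and the new instance (PID 1640) binds its sockets above but still reports the clock as unsynchronized. A hedged sketch for digging into this after the fact, assuming coredumpctl and the standard ntp query tool are installed:

  # locate and inspect the captured ntpd core dump
  coredumpctl list ntpd
  coredumpctl info 1517
  # check whether the restarted daemon has picked peers and started syncing
  ntpq -pn
  journalctl -u ntpd --no-pager | tail -n 20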
Sep 16 04:52:09.972082 containerd[1551]: time="2025-09-16T04:52:09.972018723Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:52:09.974500 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:52:09.991859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:52:10.035125 locksmithd[1605]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:52:10.094251 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:52:10.102535 containerd[1551]: time="2025-09-16T04:52:10.102337145Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="700.299µs" Sep 16 04:52:10.102729 containerd[1551]: time="2025-09-16T04:52:10.102699581Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:52:10.102863 containerd[1551]: time="2025-09-16T04:52:10.102840006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:52:10.103168 containerd[1551]: time="2025-09-16T04:52:10.103137797Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:52:10.104126 containerd[1551]: time="2025-09-16T04:52:10.104074722Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:52:10.104291 containerd[1551]: time="2025-09-16T04:52:10.104268047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.104592865Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.104631888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.104958718Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.104983067Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.105002140Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.105019405Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:52:10.108072 containerd[1551]: time="2025-09-16T04:52:10.105140485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:52:10.110203 systemd[1]: Started getty@tty1.service - Getty on tty1. 
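containerd notes above that it migrated a version 2 /usr/share/containerd/config.toml on the fly and suggests `containerd config migrate` to make that permanent. A short sketch of the two relevant invocations; whether to overwrite the shipped config is a local decision:

  # print the built-in default configuration for this containerd build
  containerd config default
  # emit the current config converted to the latest schema; redirect to keep it
  containerd config migrate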
Sep 16 04:52:10.112300 containerd[1551]: time="2025-09-16T04:52:10.112263640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:52:10.114927 containerd[1551]: time="2025-09-16T04:52:10.112465519Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:52:10.114927 containerd[1551]: time="2025-09-16T04:52:10.112503822Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:52:10.114927 containerd[1551]: time="2025-09-16T04:52:10.113700473Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:52:10.115927 containerd[1551]: time="2025-09-16T04:52:10.115892013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:52:10.116172 containerd[1551]: time="2025-09-16T04:52:10.116135545Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:52:10.128148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131178042Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131315613Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131341825Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131487522Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131515628Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131534328Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131555148Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131573808Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131593835Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131629179Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131647214Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:52:10.132573 containerd[1551]: time="2025-09-16T04:52:10.131670812Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:52:10.139712 containerd[1551]: time="2025-09-16T04:52:10.137350419Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:52:10.139712 containerd[1551]: time="2025-09-16T04:52:10.139595018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:52:10.138019 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.140137766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.141457726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.141496686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.141538338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.141561853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.141591303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.142297663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.142331007Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:52:10.142530 containerd[1551]: time="2025-09-16T04:52:10.142350725Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:52:10.148218 containerd[1551]: time="2025-09-16T04:52:10.144324786Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:52:10.148218 containerd[1551]: time="2025-09-16T04:52:10.144367467Z" level=info msg="Start snapshots syncer" Sep 16 04:52:10.148218 containerd[1551]: time="2025-09-16T04:52:10.145491548Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:52:10.148359 containerd[1551]: time="2025-09-16T04:52:10.147172369Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:52:10.148359 containerd[1551]: time="2025-09-16T04:52:10.147257018Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.156805041Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157483773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157537766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157559657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157579969Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157606338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157625544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157644223Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157686547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: 
time="2025-09-16T04:52:10.157706317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:52:10.158356 containerd[1551]: time="2025-09-16T04:52:10.157723721Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160459465Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160571951Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160597505Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160618005Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160634186Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160653031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160672474Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160698722Z" level=info msg="runtime interface created" Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160708673Z" level=info msg="created NRI interface" Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160730199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160765431Z" level=info msg="Connect containerd service" Sep 16 04:52:10.161935 containerd[1551]: time="2025-09-16T04:52:10.160814369Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:52:10.164414 containerd[1551]: time="2025-09-16T04:52:10.164083503Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:52:10.277803 polkitd[1637]: Started polkitd version 126 Sep 16 04:52:10.299272 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Sep 16 04:52:10.300002 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Sep 16 04:52:10.300071 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 04:52:10.303616 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 16 04:52:10.303676 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 04:52:10.303738 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 16 04:52:10.306812 polkitd[1637]: 
Finished loading, compiling and executing 2 rules Sep 16 04:52:10.308126 systemd[1]: Started polkit.service - Authorization Manager. Sep 16 04:52:10.311128 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 16 04:52:10.312648 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 16 04:52:10.415233 systemd-resolved[1443]: System hostname changed to 'ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7'. Sep 16 04:52:10.415502 systemd-hostnamed[1604]: Hostname set to (transient) Sep 16 04:52:10.512702 containerd[1551]: time="2025-09-16T04:52:10.512637766Z" level=info msg="Start subscribing containerd event" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.512889412Z" level=info msg="Start recovering state" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513084483Z" level=info msg="Start event monitor" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513104170Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513127026Z" level=info msg="Start streaming server" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513149018Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513164585Z" level=info msg="runtime interface starting up..." Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513174618Z" level=info msg="starting plugins..." Sep 16 04:52:10.513328 containerd[1551]: time="2025-09-16T04:52:10.513193305Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:52:10.518302 containerd[1551]: time="2025-09-16T04:52:10.517755883Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:52:10.518302 containerd[1551]: time="2025-09-16T04:52:10.517851893Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:52:10.518302 containerd[1551]: time="2025-09-16T04:52:10.517950793Z" level=info msg="containerd successfully booted in 0.579488s" Sep 16 04:52:10.518088 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:52:10.542176 sshd[1638]: Accepted publickey for core from 139.178.68.195 port 45574 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:10.548066 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:10.567738 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:52:10.579188 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:52:10.615650 systemd-logind[1532]: New session 1 of user core. Sep 16 04:52:10.629772 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:52:10.648872 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:52:10.691869 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:52:10.701198 systemd-logind[1532]: New session c1 of user core. Sep 16 04:52:10.885453 tar[1548]: linux-amd64/LICENSE Sep 16 04:52:10.885453 tar[1548]: linux-amd64/README.md Sep 16 04:52:10.939209 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:52:11.019632 instance-setup[1603]: INFO Running google_set_multiqueue. 
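A few lines earlier containerd warned that no CNI network config was found in /etc/cni/net.d, so the CRI plugin starts with pod networking deferred; that is normal before a CNI provider (flannel, cilium, and so on) has been installed. A quick way to see which plugins loaded and whether a network config has appeared, assuming the stock ctr client:

  ctr plugins ls | grep -E 'cri|cni'   # plugin load status as containerd sees it
  ls /etc/cni/net.d                    # empty until a CNI provider drops a config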
Sep 16 04:52:11.049021 systemd[1682]: Queued start job for default target default.target. Sep 16 04:52:11.053177 instance-setup[1603]: INFO Set channels for eth0 to 2. Sep 16 04:52:11.056080 systemd[1682]: Created slice app.slice - User Application Slice. Sep 16 04:52:11.056135 systemd[1682]: Reached target paths.target - Paths. Sep 16 04:52:11.056336 systemd[1682]: Reached target timers.target - Timers. Sep 16 04:52:11.058935 instance-setup[1603]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Sep 16 04:52:11.063334 instance-setup[1603]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Sep 16 04:52:11.063680 instance-setup[1603]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Sep 16 04:52:11.063935 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:52:11.067526 instance-setup[1603]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Sep 16 04:52:11.068548 instance-setup[1603]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Sep 16 04:52:11.073375 instance-setup[1603]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Sep 16 04:52:11.073457 instance-setup[1603]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Sep 16 04:52:11.074902 instance-setup[1603]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Sep 16 04:52:11.089659 instance-setup[1603]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 04:52:11.093983 instance-setup[1603]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 04:52:11.096425 instance-setup[1603]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 16 04:52:11.096490 instance-setup[1603]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 16 04:52:11.104593 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:52:11.104800 systemd[1682]: Reached target sockets.target - Sockets. Sep 16 04:52:11.104892 systemd[1682]: Reached target basic.target - Basic System. Sep 16 04:52:11.104969 systemd[1682]: Reached target default.target - Main User Target. Sep 16 04:52:11.105027 systemd[1682]: Startup finished in 381ms. Sep 16 04:52:11.105052 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:52:11.121742 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:52:11.126005 init.sh[1602]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 16 04:52:11.343375 startup-script[1721]: INFO Starting startup scripts. Sep 16 04:52:11.350289 startup-script[1721]: INFO No startup scripts found in metadata. Sep 16 04:52:11.350370 startup-script[1721]: INFO Finished running startup scripts. Sep 16 04:52:11.366853 systemd[1]: Started sshd@1-10.128.0.59:22-139.178.68.195:46378.service - OpenSSH per-connection server daemon (139.178.68.195:46378). Sep 16 04:52:11.383537 init.sh[1602]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 16 04:52:11.383537 init.sh[1602]: + daemon_pids=() Sep 16 04:52:11.383537 init.sh[1602]: + for d in accounts clock_skew network Sep 16 04:52:11.383759 init.sh[1602]: + daemon_pids+=($!) Sep 16 04:52:11.383803 init.sh[1602]: + for d in accounts clock_skew network Sep 16 04:52:11.384262 init.sh[1602]: + daemon_pids+=($!) 
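google_set_multiqueue above pins the virtio NIC's IRQs to specific CPUs and writes per-queue XPS masks (the two "write error" lines it logs are non-fatal; the script continues and sets both queues). The resulting state can be read back from the same paths it touched:

  # IRQ -> CPU affinity chosen for the virtio queues
  cat /proc/irq/27/smp_affinity_list /proc/irq/28/smp_affinity_list
  cat /proc/irq/29/smp_affinity_list /proc/irq/30/smp_affinity_list
  # transmit packet steering masks per TX queue
  cat /sys/class/net/eth0/queues/tx-0/xps_cpus /sys/class/net/eth0/queues/tx-1/xps_cpus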
Sep 16 04:52:11.384350 init.sh[1728]: + /usr/bin/google_accounts_daemon Sep 16 04:52:11.384720 init.sh[1602]: + for d in accounts clock_skew network Sep 16 04:52:11.384720 init.sh[1602]: + daemon_pids+=($!) Sep 16 04:52:11.384720 init.sh[1602]: + NOTIFY_SOCKET=/run/systemd/notify Sep 16 04:52:11.384720 init.sh[1602]: + /usr/bin/systemd-notify --ready Sep 16 04:52:11.387409 init.sh[1729]: + /usr/bin/google_clock_skew_daemon Sep 16 04:52:11.387763 init.sh[1730]: + /usr/bin/google_network_daemon Sep 16 04:52:11.406855 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 16 04:52:11.422561 init.sh[1602]: + wait -n 1728 1729 1730 Sep 16 04:52:11.744102 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 46378 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:11.746137 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:11.763176 systemd-logind[1532]: New session 2 of user core. Sep 16 04:52:11.766634 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:52:11.839696 google-clock-skew[1729]: INFO Starting Google Clock Skew daemon. Sep 16 04:52:11.850153 google-clock-skew[1729]: INFO Clock drift token has changed: 0. Sep 16 04:52:11.860270 google-networking[1730]: INFO Starting Google Networking daemon. Sep 16 04:52:12.000826 systemd-resolved[1443]: Clock change detected. Flushing caches. Sep 16 04:52:12.002946 google-clock-skew[1729]: INFO Synced system time with hardware clock. Sep 16 04:52:12.056681 groupadd[1743]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 16 04:52:12.061739 groupadd[1743]: group added to /etc/gshadow: name=google-sudoers Sep 16 04:52:12.077358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:12.089912 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:52:12.092179 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:52:12.099133 systemd[1]: Startup finished in 3.651s (kernel) + 11.454s (initrd) + 9.730s (userspace) = 24.836s. Sep 16 04:52:12.106758 sshd[1740]: Connection closed by 139.178.68.195 port 46378 Sep 16 04:52:12.106934 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:12.130272 groupadd[1743]: new group: name=google-sudoers, GID=1000 Sep 16 04:52:12.138118 systemd[1]: sshd@1-10.128.0.59:22-139.178.68.195:46378.service: Deactivated successfully. Sep 16 04:52:12.143776 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:52:12.156950 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:52:12.173599 systemd[1]: Started sshd@2-10.128.0.59:22-139.178.68.195:46386.service - OpenSSH per-connection server daemon (139.178.68.195:46386). Sep 16 04:52:12.176787 systemd-logind[1532]: Removed session 2. Sep 16 04:52:12.195996 google-accounts[1728]: INFO Starting Google Accounts daemon. Sep 16 04:52:12.209708 google-accounts[1728]: WARNING OS Login not installed. Sep 16 04:52:12.211708 google-accounts[1728]: INFO Creating a new user account for 0. Sep 16 04:52:12.221059 init.sh[1767]: useradd: invalid user name '0': use --badname to ignore Sep 16 04:52:12.221436 google-accounts[1728]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. 
Sep 16 04:52:12.501437 sshd[1765]: Accepted publickey for core from 139.178.68.195 port 46386 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:12.503993 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:12.511675 systemd-logind[1532]: New session 3 of user core. Sep 16 04:52:12.517863 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:52:12.714459 sshd[1775]: Connection closed by 139.178.68.195 port 46386 Sep 16 04:52:12.716545 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:12.722995 systemd[1]: sshd@2-10.128.0.59:22-139.178.68.195:46386.service: Deactivated successfully. Sep 16 04:52:12.726916 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:52:12.730601 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:52:12.733298 systemd-logind[1532]: Removed session 3. Sep 16 04:52:13.008367 kubelet[1753]: E0916 04:52:13.008231 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:52:13.011878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:52:13.012160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:52:13.012790 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 264.2M memory peak. Sep 16 04:52:22.773007 systemd[1]: Started sshd@3-10.128.0.59:22-139.178.68.195:44274.service - OpenSSH per-connection server daemon (139.178.68.195:44274). Sep 16 04:52:23.025412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:52:23.030381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:23.087800 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 44274 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:23.088722 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:23.097061 systemd-logind[1532]: New session 4 of user core. Sep 16 04:52:23.104804 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:52:23.305404 sshd[1789]: Connection closed by 139.178.68.195 port 44274 Sep 16 04:52:23.306564 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:23.312324 systemd[1]: sshd@3-10.128.0.59:22-139.178.68.195:44274.service: Deactivated successfully. Sep 16 04:52:23.314955 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:52:23.316221 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:52:23.318395 systemd-logind[1532]: Removed session 4. Sep 16 04:52:23.362027 systemd[1]: Started sshd@4-10.128.0.59:22-139.178.68.195:44280.service - OpenSSH per-connection server daemon (139.178.68.195:44280). Sep 16 04:52:23.403825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
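The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") and the restart loop that follows are expected on a node that has not yet been joined to a cluster: with a kubeadm-style setup that file is only written by kubeadm init/join, so kubelet keeps exiting until then. A minimal sketch of what to check, assuming nothing beyond what the unit itself reports:

  ls -l /var/lib/kubelet/config.yaml      # absent until the node is bootstrapped
  systemctl status kubelet --no-pager     # shows the exit-code failures and restart counter
  journalctl -u kubelet -n 20 --no-pager  # the same run.go error seen above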
Sep 16 04:52:23.419160 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:52:23.472900 kubelet[1803]: E0916 04:52:23.472829 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:52:23.477579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:52:23.477835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:52:23.478396 systemd[1]: kubelet.service: Consumed 205ms CPU time, 108.7M memory peak. Sep 16 04:52:23.668736 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 44280 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:23.670414 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:23.677683 systemd-logind[1532]: New session 5 of user core. Sep 16 04:52:23.684828 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 04:52:23.878249 sshd[1811]: Connection closed by 139.178.68.195 port 44280 Sep 16 04:52:23.879089 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:23.885121 systemd[1]: sshd@4-10.128.0.59:22-139.178.68.195:44280.service: Deactivated successfully. Sep 16 04:52:23.887458 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:52:23.888731 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:52:23.890796 systemd-logind[1532]: Removed session 5. Sep 16 04:52:23.936068 systemd[1]: Started sshd@5-10.128.0.59:22-139.178.68.195:44292.service - OpenSSH per-connection server daemon (139.178.68.195:44292). Sep 16 04:52:24.243365 sshd[1817]: Accepted publickey for core from 139.178.68.195 port 44292 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:24.245099 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:24.252679 systemd-logind[1532]: New session 6 of user core. Sep 16 04:52:24.258923 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 04:52:24.456354 sshd[1820]: Connection closed by 139.178.68.195 port 44292 Sep 16 04:52:24.456910 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:24.462998 systemd[1]: sshd@5-10.128.0.59:22-139.178.68.195:44292.service: Deactivated successfully. Sep 16 04:52:24.465336 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:52:24.466661 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:52:24.468764 systemd-logind[1532]: Removed session 6. Sep 16 04:52:24.513950 systemd[1]: Started sshd@6-10.128.0.59:22-139.178.68.195:44304.service - OpenSSH per-connection server daemon (139.178.68.195:44304). Sep 16 04:52:24.822107 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 44304 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:24.823911 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:24.831245 systemd-logind[1532]: New session 7 of user core. Sep 16 04:52:24.836849 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 16 04:52:25.019622 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:52:25.020167 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:52:25.037015 sudo[1830]: pam_unix(sudo:session): session closed for user root Sep 16 04:52:25.079588 sshd[1829]: Connection closed by 139.178.68.195 port 44304 Sep 16 04:52:25.081179 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:25.086434 systemd[1]: sshd@6-10.128.0.59:22-139.178.68.195:44304.service: Deactivated successfully. Sep 16 04:52:25.088938 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:52:25.091734 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:52:25.093276 systemd-logind[1532]: Removed session 7. Sep 16 04:52:25.137281 systemd[1]: Started sshd@7-10.128.0.59:22-139.178.68.195:44306.service - OpenSSH per-connection server daemon (139.178.68.195:44306). Sep 16 04:52:25.442979 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 44306 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:25.444838 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:25.452534 systemd-logind[1532]: New session 8 of user core. Sep 16 04:52:25.454842 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:52:25.623132 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:52:25.623776 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:52:25.631156 sudo[1841]: pam_unix(sudo:session): session closed for user root Sep 16 04:52:25.644948 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:52:25.645431 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:52:25.658398 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:52:25.704532 augenrules[1863]: No rules Sep 16 04:52:25.705270 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:52:25.705584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:52:25.706941 sudo[1840]: pam_unix(sudo:session): session closed for user root Sep 16 04:52:25.750002 sshd[1839]: Connection closed by 139.178.68.195 port 44306 Sep 16 04:52:25.750967 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:25.756823 systemd[1]: sshd@7-10.128.0.59:22-139.178.68.195:44306.service: Deactivated successfully. Sep 16 04:52:25.759517 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:52:25.760856 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:52:25.762999 systemd-logind[1532]: Removed session 8. Sep 16 04:52:25.803432 systemd[1]: Started sshd@8-10.128.0.59:22-139.178.68.195:44308.service - OpenSSH per-connection server daemon (139.178.68.195:44308). Sep 16 04:52:26.102594 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 44308 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:52:26.104266 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:52:26.111688 systemd-logind[1532]: New session 9 of user core. Sep 16 04:52:26.118836 systemd[1]: Started session-9.scope - Session 9 of User core. 
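The sudo session above removes the default SELinux and audit rule files and restarts audit-rules, after which augenrules reports "No rules". Verifying the loaded kernel audit ruleset afterwards is a one-liner, assuming auditctl is available:

  auditctl -l              # prints "No rules" once the rule files are gone
  ls /etc/audit/rules.d/   # the two removed files should no longer be listed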
Sep 16 04:52:26.281629 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:52:26.282127 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:52:26.309786 systemd[1]: Started sshd@9-10.128.0.59:22-95.105.56.83:55318.service - OpenSSH per-connection server daemon (95.105.56.83:55318). Sep 16 04:52:26.442122 sshd[1882]: Connection closed by 95.105.56.83 port 55318 Sep 16 04:52:26.442311 systemd[1]: sshd@9-10.128.0.59:22-95.105.56.83:55318.service: Deactivated successfully. Sep 16 04:52:26.784191 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:52:26.807328 (dockerd)[1899]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:52:27.159375 dockerd[1899]: time="2025-09-16T04:52:27.159058044Z" level=info msg="Starting up" Sep 16 04:52:27.161823 dockerd[1899]: time="2025-09-16T04:52:27.161776416Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:52:27.178033 dockerd[1899]: time="2025-09-16T04:52:27.177954180Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:52:27.204391 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3588697480-merged.mount: Deactivated successfully. Sep 16 04:52:27.218787 systemd[1]: var-lib-docker-metacopy\x2dcheck2094600161-merged.mount: Deactivated successfully. Sep 16 04:52:27.239971 dockerd[1899]: time="2025-09-16T04:52:27.239919035Z" level=info msg="Loading containers: start." Sep 16 04:52:27.260641 kernel: Initializing XFRM netlink socket Sep 16 04:52:27.620119 systemd-networkd[1440]: docker0: Link UP Sep 16 04:52:27.625904 dockerd[1899]: time="2025-09-16T04:52:27.625845114Z" level=info msg="Loading containers: done." Sep 16 04:52:27.644352 dockerd[1899]: time="2025-09-16T04:52:27.644290603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:52:27.644565 dockerd[1899]: time="2025-09-16T04:52:27.644399814Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:52:27.644565 dockerd[1899]: time="2025-09-16T04:52:27.644523949Z" level=info msg="Initializing buildkit" Sep 16 04:52:27.675932 dockerd[1899]: time="2025-09-16T04:52:27.675866643Z" level=info msg="Completed buildkit initialization" Sep 16 04:52:27.685777 dockerd[1899]: time="2025-09-16T04:52:27.685694250Z" level=info msg="Daemon has completed initialization" Sep 16 04:52:27.685944 dockerd[1899]: time="2025-09-16T04:52:27.685774730Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:52:27.686403 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:52:28.196261 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4170720155-merged.mount: Deactivated successfully. Sep 16 04:52:28.595575 containerd[1551]: time="2025-09-16T04:52:28.595514123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 04:52:29.118425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862900732.mount: Deactivated successfully. 
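dockerd comes up with the overlay2 driver and creates docker0, and immediately afterwards containerd begins pulling the Kubernetes control-plane images. Quick checks for both, assuming the stock docker and ctr clients and the k8s.io namespace the CRI plugin conventionally uses:

  docker info --format '{{.ServerVersion}} {{.Driver}}'   # expect 28.0.4 overlay2
  ip link show docker0
  ctr -n k8s.io images ls | grep kube-apiserver            # the image pulled below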
Sep 16 04:52:30.823006 containerd[1551]: time="2025-09-16T04:52:30.822933834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.824569 containerd[1551]: time="2025-09-16T04:52:30.824434967Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28124707" Sep 16 04:52:30.826342 containerd[1551]: time="2025-09-16T04:52:30.825722255Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.829064 containerd[1551]: time="2025-09-16T04:52:30.829021460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:30.830531 containerd[1551]: time="2025-09-16T04:52:30.830478533Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.23491178s" Sep 16 04:52:30.830721 containerd[1551]: time="2025-09-16T04:52:30.830693810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 16 04:52:30.831459 containerd[1551]: time="2025-09-16T04:52:30.831431856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 04:52:32.468707 containerd[1551]: time="2025-09-16T04:52:32.468639521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:32.470125 containerd[1551]: time="2025-09-16T04:52:32.470039607Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24718566" Sep 16 04:52:32.471640 containerd[1551]: time="2025-09-16T04:52:32.471115025Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:32.474222 containerd[1551]: time="2025-09-16T04:52:32.474180741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:32.475645 containerd[1551]: time="2025-09-16T04:52:32.475588514Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.643997366s" Sep 16 04:52:32.475797 containerd[1551]: time="2025-09-16T04:52:32.475772887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 16 
04:52:32.476422 containerd[1551]: time="2025-09-16T04:52:32.476363574Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 04:52:33.592288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:52:33.600893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:33.919649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:33.932280 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:52:33.974951 containerd[1551]: time="2025-09-16T04:52:33.974876444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:33.978640 containerd[1551]: time="2025-09-16T04:52:33.978377075Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18789614" Sep 16 04:52:34.004284 kubelet[2181]: E0916 04:52:34.004210 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:52:34.007947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:52:34.008210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:52:34.009301 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.5M memory peak. Sep 16 04:52:34.074577 containerd[1551]: time="2025-09-16T04:52:34.074289407Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:34.120538 containerd[1551]: time="2025-09-16T04:52:34.120455425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:34.123262 containerd[1551]: time="2025-09-16T04:52:34.123032718Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.646345472s" Sep 16 04:52:34.123262 containerd[1551]: time="2025-09-16T04:52:34.123087344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 16 04:52:34.124064 containerd[1551]: time="2025-09-16T04:52:34.123917533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 04:52:35.263189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592359074.mount: Deactivated successfully. 
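Note on the kubelet failure above: the unit exits because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during `kubeadm init`/`kubeadm join`, so the service crash-loops until bootstrap runs. For orientation, a minimal KubeletConfiguration of the kind that ends up at that path might look like the sketch below (field values are illustrative assumptions, not read from this machine):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches the CRI runtime's cgroup driver reported later in this log
    staticPodPath: /etc/kubernetes/manifests
    rotateCertificates: true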
Sep 16 04:52:35.931912 containerd[1551]: time="2025-09-16T04:52:35.931842416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.933409 containerd[1551]: time="2025-09-16T04:52:35.933128655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30412147" Sep 16 04:52:35.934658 containerd[1551]: time="2025-09-16T04:52:35.934592043Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.938932 containerd[1551]: time="2025-09-16T04:52:35.937569324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:35.938932 containerd[1551]: time="2025-09-16T04:52:35.938425511Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.814147215s" Sep 16 04:52:35.938932 containerd[1551]: time="2025-09-16T04:52:35.938468149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 16 04:52:35.939465 containerd[1551]: time="2025-09-16T04:52:35.939421516Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:52:36.376047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288735760.mount: Deactivated successfully. 
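Note: the image pulls above are driven through containerd's CRI API. They could be reproduced or inspected by hand on the node with the standard crictl client, roughly as follows (image tags taken from the log; the commands themselves are generic crictl usage, not part of this boot sequence):

    crictl pull registry.k8s.io/coredns/coredns:v1.11.3
    crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|coredns'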
Sep 16 04:52:37.586044 containerd[1551]: time="2025-09-16T04:52:37.585968250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:37.587668 containerd[1551]: time="2025-09-16T04:52:37.587436153Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 16 04:52:37.588957 containerd[1551]: time="2025-09-16T04:52:37.588912700Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:37.592418 containerd[1551]: time="2025-09-16T04:52:37.592365325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:37.593996 containerd[1551]: time="2025-09-16T04:52:37.593811200Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.654350354s" Sep 16 04:52:37.593996 containerd[1551]: time="2025-09-16T04:52:37.593856676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:52:37.595046 containerd[1551]: time="2025-09-16T04:52:37.595011947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:52:37.984338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221788915.mount: Deactivated successfully. 
Sep 16 04:52:37.990901 containerd[1551]: time="2025-09-16T04:52:37.990836893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:52:37.992139 containerd[1551]: time="2025-09-16T04:52:37.991871303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 16 04:52:37.993387 containerd[1551]: time="2025-09-16T04:52:37.993342242Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:52:37.996179 containerd[1551]: time="2025-09-16T04:52:37.996135170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:52:37.997170 containerd[1551]: time="2025-09-16T04:52:37.997130857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 402.079072ms" Sep 16 04:52:37.997350 containerd[1551]: time="2025-09-16T04:52:37.997324079Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:52:37.998064 containerd[1551]: time="2025-09-16T04:52:37.998015320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:52:38.423286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313363653.mount: Deactivated successfully. Sep 16 04:52:40.560355 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
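Note the extra label io.cri-containerd.pinned=pinned on the pause:3.10 image above: pause is the sandbox image that anchors each pod's shared namespaces, so containerd marks it pinned to keep the kubelet's image garbage collector from removing it. One way to check the flag on a running node would be roughly:

    crictl inspecti registry.k8s.io/pause:3.10 | grep -i pinned    # expected to report "pinned": true on recent containerd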
Sep 16 04:52:40.720366 containerd[1551]: time="2025-09-16T04:52:40.720281805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:40.721940 containerd[1551]: time="2025-09-16T04:52:40.721886709Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218" Sep 16 04:52:40.723449 containerd[1551]: time="2025-09-16T04:52:40.723379012Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:40.726772 containerd[1551]: time="2025-09-16T04:52:40.726706315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:52:40.728770 containerd[1551]: time="2025-09-16T04:52:40.728139377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.730083626s" Sep 16 04:52:40.728770 containerd[1551]: time="2025-09-16T04:52:40.728186723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 16 04:52:44.092241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 04:52:44.095897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:44.475435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:44.489326 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:52:44.562193 kubelet[2336]: E0916 04:52:44.562112 2336 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:52:44.564949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:52:44.565195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:52:44.565988 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.3M memory peak. Sep 16 04:52:44.699080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:44.699701 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.3M memory peak. Sep 16 04:52:44.702993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:44.759252 systemd[1]: Reload requested from client PID 2351 ('systemctl') (unit session-9.scope)... Sep 16 04:52:44.759467 systemd[1]: Reloading... Sep 16 04:52:44.934669 zram_generator::config[2395]: No configuration found. Sep 16 04:52:45.285283 systemd[1]: Reloading finished in 524 ms. Sep 16 04:52:45.411953 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:52:45.412100 systemd[1]: kubelet.service: Failed with result 'signal'. 
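Note: the "Scheduled restart job, restart counter is at 3" entry shows systemd re-launching the kubelet roughly every ten seconds while its config file is still missing. That behaviour comes from the unit's restart policy; in a kubeadm-style kubelet.service the relevant directives are typically along these lines (a sketch of the usual drop-in, not a dump of this host's unit):

    [Service]
    Restart=always
    RestartSec=10s

The loop can be watched live with `journalctl -u kubelet -b -f` or `systemctl status kubelet`.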
Sep 16 04:52:45.412521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:45.412595 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Sep 16 04:52:45.417560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:45.929769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:45.946271 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:52:46.009839 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:52:46.009839 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:52:46.009839 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:52:46.010382 kubelet[2445]: I0916 04:52:46.009923 2445 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:52:46.939265 kubelet[2445]: I0916 04:52:46.939199 2445 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:52:46.939265 kubelet[2445]: I0916 04:52:46.939241 2445 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:52:46.939658 kubelet[2445]: I0916 04:52:46.939632 2445 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:52:46.967751 kubelet[2445]: E0916 04:52:46.967685 2445 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:46.976708 kubelet[2445]: I0916 04:52:46.976299 2445 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:52:46.986696 kubelet[2445]: I0916 04:52:46.986664 2445 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:52:46.992353 kubelet[2445]: I0916 04:52:46.992299 2445 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:52:46.993697 kubelet[2445]: I0916 04:52:46.993657 2445 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:52:46.993989 kubelet[2445]: I0916 04:52:46.993943 2445 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:52:46.994244 kubelet[2445]: I0916 04:52:46.993989 2445 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:52:46.994455 kubelet[2445]: I0916 04:52:46.994255 2445 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:52:46.994455 kubelet[2445]: I0916 04:52:46.994272 2445 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:52:46.994455 kubelet[2445]: I0916 04:52:46.994421 2445 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:52:46.999701 kubelet[2445]: I0916 04:52:46.999095 2445 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:52:46.999701 kubelet[2445]: I0916 04:52:46.999146 2445 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:52:46.999701 kubelet[2445]: I0916 04:52:46.999204 2445 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:52:46.999701 kubelet[2445]: I0916 04:52:46.999237 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:52:47.011284 kubelet[2445]: W0916 04:52:47.010885 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7&limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:47.011284 kubelet[2445]: E0916 04:52:47.010992 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7&limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:47.011974 kubelet[2445]: I0916 04:52:47.011505 2445 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:52:47.012385 kubelet[2445]: I0916 04:52:47.012335 2445 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:52:47.013561 kubelet[2445]: W0916 04:52:47.013291 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:47.013561 kubelet[2445]: E0916 04:52:47.013371 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:47.013561 kubelet[2445]: W0916 04:52:47.013473 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:52:47.017002 kubelet[2445]: I0916 04:52:47.016976 2445 server.go:1274] "Started kubelet" Sep 16 04:52:47.018600 kubelet[2445]: I0916 04:52:47.018576 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:52:47.027701 kubelet[2445]: I0916 04:52:47.027276 2445 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:52:47.029789 kubelet[2445]: E0916 04:52:47.027753 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7.1865aa3355c2dce4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,UID:ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,},FirstTimestamp:2025-09-16 04:52:47.016934628 +0000 UTC m=+1.065034508,LastTimestamp:2025-09-16 04:52:47.016934628 +0000 UTC m=+1.065034508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,}" Sep 16 04:52:47.037830 kubelet[2445]: I0916 04:52:47.037789 2445 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:52:47.041729 kubelet[2445]: E0916 04:52:47.041669 2445 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:52:47.041898 kubelet[2445]: I0916 04:52:47.041786 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:52:47.043390 kubelet[2445]: I0916 04:52:47.042104 2445 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:52:47.043390 kubelet[2445]: I0916 04:52:47.042446 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:52:47.044636 kubelet[2445]: I0916 04:52:47.043696 2445 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:52:47.045044 kubelet[2445]: E0916 04:52:47.045019 2445 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" not found" Sep 16 04:52:47.046903 kubelet[2445]: E0916 04:52:47.046856 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7?timeout=10s\": dial tcp 10.128.0.59:6443: connect: connection refused" interval="200ms" Sep 16 04:52:47.047012 kubelet[2445]: I0916 04:52:47.046948 2445 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:52:47.049069 kubelet[2445]: I0916 04:52:47.049025 2445 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:52:47.049207 kubelet[2445]: I0916 04:52:47.049174 2445 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:52:47.049629 kubelet[2445]: W0916 04:52:47.049549 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:47.049730 kubelet[2445]: E0916 04:52:47.049646 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:47.050971 kubelet[2445]: I0916 04:52:47.050952 2445 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:52:47.051877 kubelet[2445]: I0916 04:52:47.051847 2445 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:52:47.067059 kubelet[2445]: I0916 04:52:47.066826 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:52:47.070064 kubelet[2445]: I0916 04:52:47.070024 2445 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:52:47.070064 kubelet[2445]: I0916 04:52:47.070067 2445 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:52:47.070245 kubelet[2445]: I0916 04:52:47.070101 2445 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:52:47.070245 kubelet[2445]: E0916 04:52:47.070164 2445 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:52:47.080470 kubelet[2445]: W0916 04:52:47.080304 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:47.080470 kubelet[2445]: E0916 04:52:47.080389 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:47.101449 kubelet[2445]: I0916 04:52:47.101411 2445 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:52:47.101669 kubelet[2445]: I0916 04:52:47.101510 2445 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:52:47.101669 kubelet[2445]: I0916 04:52:47.101567 2445 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:52:47.104822 kubelet[2445]: I0916 04:52:47.104769 2445 policy_none.go:49] "None policy: Start" Sep 16 04:52:47.105967 kubelet[2445]: I0916 04:52:47.105918 2445 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:52:47.105967 kubelet[2445]: I0916 04:52:47.105955 2445 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:52:47.116130 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:52:47.130737 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:52:47.145858 kubelet[2445]: E0916 04:52:47.145797 2445 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" not found" Sep 16 04:52:47.154758 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:52:47.157624 kubelet[2445]: I0916 04:52:47.157552 2445 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:52:47.157998 kubelet[2445]: I0916 04:52:47.157875 2445 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:52:47.157998 kubelet[2445]: I0916 04:52:47.157922 2445 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:52:47.158450 kubelet[2445]: I0916 04:52:47.158417 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:52:47.162035 kubelet[2445]: E0916 04:52:47.161991 2445 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" not found" Sep 16 04:52:47.200911 systemd[1]: Created slice kubepods-burstable-podec5f817f9e9248e36d7950fa3a04d645.slice - libcontainer container kubepods-burstable-podec5f817f9e9248e36d7950fa3a04d645.slice. 
Sep 16 04:52:47.228263 systemd[1]: Created slice kubepods-burstable-podb381ced9437cd2e06c2ee22d2579d84f.slice - libcontainer container kubepods-burstable-podb381ced9437cd2e06c2ee22d2579d84f.slice. Sep 16 04:52:47.246575 systemd[1]: Created slice kubepods-burstable-podda2717183967ca60c49d6d1e27e42efd.slice - libcontainer container kubepods-burstable-podda2717183967ca60c49d6d1e27e42efd.slice. Sep 16 04:52:47.248442 kubelet[2445]: E0916 04:52:47.248390 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7?timeout=10s\": dial tcp 10.128.0.59:6443: connect: connection refused" interval="400ms" Sep 16 04:52:47.253384 kubelet[2445]: I0916 04:52:47.253018 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253384 kubelet[2445]: I0916 04:52:47.253074 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253384 kubelet[2445]: I0916 04:52:47.253120 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253384 kubelet[2445]: I0916 04:52:47.253176 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253634 kubelet[2445]: I0916 04:52:47.253215 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b381ced9437cd2e06c2ee22d2579d84f-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"b381ced9437cd2e06c2ee22d2579d84f\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253634 kubelet[2445]: I0916 04:52:47.253248 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-ca-certs\") pod 
\"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253634 kubelet[2445]: I0916 04:52:47.253272 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253634 kubelet[2445]: I0916 04:52:47.253302 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.253784 kubelet[2445]: I0916 04:52:47.253342 2445 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.263266 kubelet[2445]: I0916 04:52:47.263207 2445 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.263798 kubelet[2445]: E0916 04:52:47.263762 2445 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.59:6443/api/v1/nodes\": dial tcp 10.128.0.59:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.478506 kubelet[2445]: I0916 04:52:47.478356 2445 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.479680 kubelet[2445]: E0916 04:52:47.479584 2445 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.59:6443/api/v1/nodes\": dial tcp 10.128.0.59:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.523643 containerd[1551]: time="2025-09-16T04:52:47.523551841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:ec5f817f9e9248e36d7950fa3a04d645,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:47.543633 containerd[1551]: time="2025-09-16T04:52:47.543547733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:b381ced9437cd2e06c2ee22d2579d84f,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:47.557647 containerd[1551]: time="2025-09-16T04:52:47.556573634Z" level=info msg="connecting to shim 772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca" address="unix:///run/containerd/s/fbe61c09602685f782e76acdb5f082dcc8fd4111e7c8789c208f660de08d03b1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 
04:52:47.558193 containerd[1551]: time="2025-09-16T04:52:47.558156496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:da2717183967ca60c49d6d1e27e42efd,Namespace:kube-system,Attempt:0,}" Sep 16 04:52:47.624171 containerd[1551]: time="2025-09-16T04:52:47.624117583Z" level=info msg="connecting to shim 60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be" address="unix:///run/containerd/s/274e496861e922720ee8d5eca245a3025e595594a6364bbdc16af13dcdd02a79" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:47.625359 systemd[1]: Started cri-containerd-772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca.scope - libcontainer container 772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca. Sep 16 04:52:47.631875 containerd[1551]: time="2025-09-16T04:52:47.631806558Z" level=info msg="connecting to shim cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440" address="unix:///run/containerd/s/960132670515b948fca01cdfdc8013a6649fcd8c6dbbfaa81cce0b1bf5594171" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:52:47.652157 kubelet[2445]: E0916 04:52:47.650972 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7?timeout=10s\": dial tcp 10.128.0.59:6443: connect: connection refused" interval="800ms" Sep 16 04:52:47.696971 systemd[1]: Started cri-containerd-60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be.scope - libcontainer container 60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be. Sep 16 04:52:47.706745 systemd[1]: Started cri-containerd-cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440.scope - libcontainer container cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440. 
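Note: the RunPodSandbox calls above come from static pod manifests the kubelet reads from /etc/kubernetes/manifests (the "Adding static pod path" entry earlier), and the hostPath volumes attached before them (k8s-certs, kubeconfig, flexvolume-dir, ca-certs, usr-share-ca-certificates) correspond to mounts declared in those manifests. A heavily trimmed sketch of such a manifest, with typical kubeadm paths assumed rather than copied from this node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      containers:
      - name: kube-controller-manager
        image: registry.k8s.io/kube-controller-manager:v1.31.13
        command: ["kube-controller-manager"]      # flags omitted in this sketch
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
        - name: kubeconfig
          mountPath: /etc/kubernetes/controller-manager.conf
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/controller-manager.conf
          type: FileOrCreate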
Sep 16 04:52:47.775744 containerd[1551]: time="2025-09-16T04:52:47.775516062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:ec5f817f9e9248e36d7950fa3a04d645,Namespace:kube-system,Attempt:0,} returns sandbox id \"772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca\"" Sep 16 04:52:47.780277 kubelet[2445]: E0916 04:52:47.780204 2445 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d" Sep 16 04:52:47.785796 containerd[1551]: time="2025-09-16T04:52:47.785735985Z" level=info msg="CreateContainer within sandbox \"772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:52:47.810808 containerd[1551]: time="2025-09-16T04:52:47.810755249Z" level=info msg="Container c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:47.833174 containerd[1551]: time="2025-09-16T04:52:47.832409988Z" level=info msg="CreateContainer within sandbox \"772d1eba8f5066c026fefb8a185d48ee73cba6eccb97f85de1de742868ca9eca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af\"" Sep 16 04:52:47.835801 containerd[1551]: time="2025-09-16T04:52:47.835600531Z" level=info msg="StartContainer for \"c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af\"" Sep 16 04:52:47.839521 containerd[1551]: time="2025-09-16T04:52:47.839470230Z" level=info msg="connecting to shim c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af" address="unix:///run/containerd/s/fbe61c09602685f782e76acdb5f082dcc8fd4111e7c8789c208f660de08d03b1" protocol=ttrpc version=3 Sep 16 04:52:47.841436 containerd[1551]: time="2025-09-16T04:52:47.841394362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:da2717183967ca60c49d6d1e27e42efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440\"" Sep 16 04:52:47.843483 kubelet[2445]: E0916 04:52:47.843123 2445 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f18" Sep 16 04:52:47.845299 containerd[1551]: time="2025-09-16T04:52:47.845261962Z" level=info msg="CreateContainer within sandbox \"cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:52:47.856629 kubelet[2445]: W0916 04:52:47.856468 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7&limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:47.857335 kubelet[2445]: E0916 04:52:47.856858 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7&limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:47.861725 containerd[1551]: time="2025-09-16T04:52:47.861675896Z" level=info msg="Container b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:47.866671 containerd[1551]: time="2025-09-16T04:52:47.866524488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7,Uid:b381ced9437cd2e06c2ee22d2579d84f,Namespace:kube-system,Attempt:0,} returns sandbox id \"60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be\"" Sep 16 04:52:47.869686 kubelet[2445]: E0916 04:52:47.869473 2445 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f18" Sep 16 04:52:47.873205 containerd[1551]: time="2025-09-16T04:52:47.873167201Z" level=info msg="CreateContainer within sandbox \"60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:52:47.874040 containerd[1551]: time="2025-09-16T04:52:47.874004829Z" level=info msg="CreateContainer within sandbox \"cb1a4b5cd9c6d94c68058ec9dc79b32da3d68441afbb9d45b63906044b00c440\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f\"" Sep 16 04:52:47.875668 containerd[1551]: time="2025-09-16T04:52:47.874725589Z" level=info msg="StartContainer for \"b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f\"" Sep 16 04:52:47.876483 containerd[1551]: time="2025-09-16T04:52:47.876446607Z" level=info msg="connecting to shim b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f" address="unix:///run/containerd/s/960132670515b948fca01cdfdc8013a6649fcd8c6dbbfaa81cce0b1bf5594171" protocol=ttrpc version=3 Sep 16 04:52:47.883345 systemd[1]: Started cri-containerd-c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af.scope - libcontainer container c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af. 
Sep 16 04:52:47.887938 kubelet[2445]: I0916 04:52:47.887873 2445 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.888968 kubelet[2445]: E0916 04:52:47.888924 2445 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.59:6443/api/v1/nodes\": dial tcp 10.128.0.59:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:47.895982 containerd[1551]: time="2025-09-16T04:52:47.895924302Z" level=info msg="Container 5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:52:47.913988 containerd[1551]: time="2025-09-16T04:52:47.913463638Z" level=info msg="CreateContainer within sandbox \"60f6ff6cc15d523c9cc457d9cd61e397236537637e1c7b35c9510513309496be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657\"" Sep 16 04:52:47.916717 containerd[1551]: time="2025-09-16T04:52:47.916680408Z" level=info msg="StartContainer for \"5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657\"" Sep 16 04:52:47.918706 containerd[1551]: time="2025-09-16T04:52:47.918670337Z" level=info msg="connecting to shim 5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657" address="unix:///run/containerd/s/274e496861e922720ee8d5eca245a3025e595594a6364bbdc16af13dcdd02a79" protocol=ttrpc version=3 Sep 16 04:52:47.919080 systemd[1]: Started cri-containerd-b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f.scope - libcontainer container b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f. Sep 16 04:52:47.974755 systemd[1]: Started cri-containerd-5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657.scope - libcontainer container 5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657. 
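Note: the "Hostname for pod was too long, truncated it" warnings above are the kubelet enforcing the 63-character DNS label limit on pod hostnames; the truncated value in the log is simply the first 63 characters of the pod name, e.g.:

    printf '%s' kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7 | cut -c1-63
    # -> kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f18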
Sep 16 04:52:48.064884 containerd[1551]: time="2025-09-16T04:52:48.064715269Z" level=info msg="StartContainer for \"c4ed3f0b88a4e99f6f8d6ab09d19c1da692d78573a8327ea39eeb0f6fd39e1af\" returns successfully" Sep 16 04:52:48.075697 kubelet[2445]: W0916 04:52:48.075563 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:48.076268 kubelet[2445]: E0916 04:52:48.075715 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:48.078337 containerd[1551]: time="2025-09-16T04:52:48.078299992Z" level=info msg="StartContainer for \"b481d2118071d49c38bc3879d454c88e64e4335f3540821509613e7631944a7f\" returns successfully" Sep 16 04:52:48.105911 containerd[1551]: time="2025-09-16T04:52:48.105835810Z" level=info msg="StartContainer for \"5ebaa0da1c3a28502f9367de55b7aae6d87ee574978bf5740b93d705c6ac5657\" returns successfully" Sep 16 04:52:48.222979 kubelet[2445]: W0916 04:52:48.222844 2445 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.59:6443: connect: connection refused Sep 16 04:52:48.223167 kubelet[2445]: E0916 04:52:48.222986 2445 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.59:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:52:48.693891 kubelet[2445]: I0916 04:52:48.693854 2445 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:52.168337 kubelet[2445]: E0916 04:52:52.168290 2445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" not found" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:52.304733 kubelet[2445]: I0916 04:52:52.304673 2445 kubelet_node_status.go:75] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:53.014261 kubelet[2445]: I0916 04:52:53.014148 2445 apiserver.go:52] "Watching apiserver" Sep 16 04:52:53.048101 kubelet[2445]: I0916 04:52:53.047972 2445 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:52:53.432205 kubelet[2445]: W0916 04:52:53.432157 2445 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 16 04:52:54.266415 update_engine[1534]: I20250916 04:52:54.266263 1534 update_attempter.cc:509] Updating boot flags... Sep 16 04:52:54.477903 systemd[1]: Reload requested from client PID 2736 ('systemctl') (unit session-9.scope)... Sep 16 04:52:54.477926 systemd[1]: Reloading... Sep 16 04:52:54.706686 zram_generator::config[2782]: No configuration found. 
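Note: once the kube-apiserver container on this node starts answering (the "connection refused" reflector errors stop and "Successfully registered node" appears above), the registration and the node lease can be verified from any machine holding the admin kubeconfig, for example:

    kubectl get nodes -o wide
    kubectl -n kube-node-lease get lease ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7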
Sep 16 04:52:55.108483 systemd[1]: Reloading finished in 629 ms. Sep 16 04:52:55.240786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:55.269400 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:52:55.270572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:55.270874 systemd[1]: kubelet.service: Consumed 1.618s CPU time, 131.4M memory peak. Sep 16 04:52:55.278052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:52:55.674767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:52:55.689090 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:52:55.764095 kubelet[2834]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:52:55.764095 kubelet[2834]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:52:55.764095 kubelet[2834]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:52:55.764741 kubelet[2834]: I0916 04:52:55.764155 2834 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:52:55.777285 kubelet[2834]: I0916 04:52:55.777200 2834 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:52:55.777285 kubelet[2834]: I0916 04:52:55.777241 2834 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:52:55.777669 kubelet[2834]: I0916 04:52:55.777598 2834 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:52:55.780253 kubelet[2834]: I0916 04:52:55.780212 2834 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:52:55.783880 kubelet[2834]: I0916 04:52:55.783740 2834 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:52:55.796164 kubelet[2834]: I0916 04:52:55.796119 2834 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:52:55.803943 kubelet[2834]: I0916 04:52:55.803889 2834 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:52:55.804471 kubelet[2834]: I0916 04:52:55.804414 2834 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:52:55.805939 kubelet[2834]: I0916 04:52:55.804689 2834 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:52:55.805939 kubelet[2834]: I0916 04:52:55.804742 2834 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:52:55.805939 kubelet[2834]: I0916 04:52:55.805026 2834 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:52:55.805939 kubelet[2834]: I0916 04:52:55.805044 2834 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:52:55.806277 kubelet[2834]: I0916 04:52:55.805097 2834 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:52:55.806277 kubelet[2834]: I0916 04:52:55.805290 2834 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:52:55.806277 kubelet[2834]: I0916 04:52:55.805310 2834 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:52:55.806789 kubelet[2834]: I0916 04:52:55.806744 2834 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:52:55.809730 kubelet[2834]: I0916 04:52:55.809662 2834 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:52:55.828141 kubelet[2834]: I0916 04:52:55.828097 2834 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:52:55.830067 kubelet[2834]: I0916 04:52:55.830010 2834 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:52:55.832152 kubelet[2834]: I0916 04:52:55.832109 2834 server.go:1274] "Started kubelet" Sep 16 04:52:55.836687 sudo[2849]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin 
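Note: the nodeConfig dump above lists the hard eviction thresholds this kubelet runs with (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). Expressed as the equivalent KubeletConfiguration stanza, those values would read roughly:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"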
Sep 16 04:52:55.837252 sudo[2849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:52:55.840346 kubelet[2834]: I0916 04:52:55.840312 2834 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:52:55.851197 kubelet[2834]: I0916 04:52:55.850579 2834 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:52:55.851197 kubelet[2834]: I0916 04:52:55.850729 2834 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:52:55.854891 kubelet[2834]: I0916 04:52:55.854855 2834 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:52:55.858757 kubelet[2834]: I0916 04:52:55.855487 2834 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:52:55.859931 kubelet[2834]: I0916 04:52:55.859763 2834 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:52:55.860893 kubelet[2834]: I0916 04:52:55.860857 2834 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:52:55.863529 kubelet[2834]: I0916 04:52:55.855983 2834 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:52:55.863833 kubelet[2834]: I0916 04:52:55.863815 2834 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:52:55.865480 kubelet[2834]: I0916 04:52:55.864824 2834 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:52:55.871129 kubelet[2834]: E0916 04:52:55.870856 2834 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:52:55.874523 kubelet[2834]: I0916 04:52:55.874498 2834 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:52:55.875149 kubelet[2834]: I0916 04:52:55.875018 2834 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:52:55.907168 kubelet[2834]: I0916 04:52:55.907098 2834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:52:55.909791 kubelet[2834]: I0916 04:52:55.909746 2834 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:52:55.910478 kubelet[2834]: I0916 04:52:55.909985 2834 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:52:55.910478 kubelet[2834]: I0916 04:52:55.910048 2834 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:52:55.910478 kubelet[2834]: E0916 04:52:55.910159 2834 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:52:55.989976 kubelet[2834]: I0916 04:52:55.989830 2834 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:52:55.989976 kubelet[2834]: I0916 04:52:55.989864 2834 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:52:55.990441 kubelet[2834]: I0916 04:52:55.990217 2834 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:52:55.990654 kubelet[2834]: I0916 04:52:55.990633 2834 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:52:55.990836 kubelet[2834]: I0916 04:52:55.990720 2834 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:52:55.990836 kubelet[2834]: I0916 04:52:55.990755 2834 policy_none.go:49] "None policy: Start" Sep 16 04:52:55.993069 kubelet[2834]: I0916 04:52:55.992900 2834 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:52:55.993069 kubelet[2834]: I0916 04:52:55.992934 2834 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:52:55.993517 kubelet[2834]: I0916 04:52:55.993453 2834 state_mem.go:75] "Updated machine memory state" Sep 16 04:52:56.004256 kubelet[2834]: I0916 04:52:56.003884 2834 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:52:56.004256 kubelet[2834]: I0916 04:52:56.004110 2834 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:52:56.004256 kubelet[2834]: I0916 04:52:56.004126 2834 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:52:56.011521 kubelet[2834]: I0916 04:52:56.011254 2834 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:52:56.036019 kubelet[2834]: W0916 04:52:56.035055 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 16 04:52:56.044471 kubelet[2834]: W0916 04:52:56.044064 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 16 04:52:56.048305 kubelet[2834]: W0916 04:52:56.048269 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 16 04:52:56.048578 kubelet[2834]: E0916 04:52:56.048551 2834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" already exists" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.137642 kubelet[2834]: I0916 04:52:56.136766 2834 kubelet_node_status.go:72] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.157159 kubelet[2834]: I0916 04:52:56.156479 2834 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.158552 kubelet[2834]: I0916 04:52:56.157650 2834 kubelet_node_status.go:75] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165549 kubelet[2834]: I0916 04:52:56.165186 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165549 kubelet[2834]: I0916 04:52:56.165244 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165821 kubelet[2834]: I0916 04:52:56.165690 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165821 kubelet[2834]: I0916 04:52:56.165733 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165821 kubelet[2834]: I0916 04:52:56.165781 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.165821 kubelet[2834]: I0916 04:52:56.165813 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da2717183967ca60c49d6d1e27e42efd-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"da2717183967ca60c49d6d1e27e42efd\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.166037 kubelet[2834]: I0916 04:52:56.165841 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: 
\"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.166037 kubelet[2834]: I0916 04:52:56.165873 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5f817f9e9248e36d7950fa3a04d645-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"ec5f817f9e9248e36d7950fa3a04d645\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.166037 kubelet[2834]: I0916 04:52:56.165904 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b381ced9437cd2e06c2ee22d2579d84f-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" (UID: \"b381ced9437cd2e06c2ee22d2579d84f\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:56.507797 sudo[2849]: pam_unix(sudo:session): session closed for user root Sep 16 04:52:56.811416 kubelet[2834]: I0916 04:52:56.811269 2834 apiserver.go:52] "Watching apiserver" Sep 16 04:52:56.864271 kubelet[2834]: I0916 04:52:56.864221 2834 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:52:56.972636 kubelet[2834]: W0916 04:52:56.971945 2834 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 16 04:52:56.972636 kubelet[2834]: E0916 04:52:56.972052 2834 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" already exists" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" Sep 16 04:52:57.123008 kubelet[2834]: I0916 04:52:57.122906 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" podStartSLOduration=1.122851231 podStartE2EDuration="1.122851231s" podCreationTimestamp="2025-09-16 04:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:57.086170464 +0000 UTC m=+1.388822239" watchObservedRunningTime="2025-09-16 04:52:57.122851231 +0000 UTC m=+1.425502991" Sep 16 04:52:57.164634 kubelet[2834]: I0916 04:52:57.164464 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" podStartSLOduration=1.164437554 podStartE2EDuration="1.164437554s" podCreationTimestamp="2025-09-16 04:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:57.12377608 +0000 UTC m=+1.426427855" watchObservedRunningTime="2025-09-16 04:52:57.164437554 +0000 UTC m=+1.467089319" Sep 16 04:52:58.532221 sudo[1876]: pam_unix(sudo:session): session closed for user root Sep 16 04:52:58.574834 sshd[1875]: Connection closed by 139.178.68.195 port 44308 Sep 16 04:52:58.576855 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Sep 16 04:52:58.583271 systemd-logind[1532]: Session 9 logged out. 
Waiting for processes to exit. Sep 16 04:52:58.584025 systemd[1]: sshd@8-10.128.0.59:22-139.178.68.195:44308.service: Deactivated successfully. Sep 16 04:52:58.587854 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:52:58.588232 systemd[1]: session-9.scope: Consumed 6.795s CPU time, 269.4M memory peak. Sep 16 04:52:58.591354 systemd-logind[1532]: Removed session 9. Sep 16 04:52:59.296631 kubelet[2834]: I0916 04:52:59.296426 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" podStartSLOduration=6.296402135 podStartE2EDuration="6.296402135s" podCreationTimestamp="2025-09-16 04:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:52:57.164910718 +0000 UTC m=+1.467562492" watchObservedRunningTime="2025-09-16 04:52:59.296402135 +0000 UTC m=+3.599053923" Sep 16 04:53:00.797862 kubelet[2834]: I0916 04:53:00.797817 2834 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:53:00.800628 containerd[1551]: time="2025-09-16T04:53:00.799831891Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:53:00.801137 kubelet[2834]: I0916 04:53:00.800120 2834 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:53:01.503433 kubelet[2834]: I0916 04:53:01.503391 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c85505fa-ea65-4ff5-96fe-d6802531b0fe-xtables-lock\") pod \"kube-proxy-tqgn9\" (UID: \"c85505fa-ea65-4ff5-96fe-d6802531b0fe\") " pod="kube-system/kube-proxy-tqgn9" Sep 16 04:53:01.503584 kubelet[2834]: I0916 04:53:01.503443 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c85505fa-ea65-4ff5-96fe-d6802531b0fe-lib-modules\") pod \"kube-proxy-tqgn9\" (UID: \"c85505fa-ea65-4ff5-96fe-d6802531b0fe\") " pod="kube-system/kube-proxy-tqgn9" Sep 16 04:53:01.503584 kubelet[2834]: I0916 04:53:01.503476 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c85505fa-ea65-4ff5-96fe-d6802531b0fe-kube-proxy\") pod \"kube-proxy-tqgn9\" (UID: \"c85505fa-ea65-4ff5-96fe-d6802531b0fe\") " pod="kube-system/kube-proxy-tqgn9" Sep 16 04:53:01.503584 kubelet[2834]: I0916 04:53:01.503501 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svx8n\" (UniqueName: \"kubernetes.io/projected/c85505fa-ea65-4ff5-96fe-d6802531b0fe-kube-api-access-svx8n\") pod \"kube-proxy-tqgn9\" (UID: \"c85505fa-ea65-4ff5-96fe-d6802531b0fe\") " pod="kube-system/kube-proxy-tqgn9" Sep 16 04:53:01.508943 systemd[1]: Created slice kubepods-besteffort-podc85505fa_ea65_4ff5_96fe_d6802531b0fe.slice - libcontainer container kubepods-besteffort-podc85505fa_ea65_4ff5_96fe_d6802531b0fe.slice. Sep 16 04:53:01.550828 systemd[1]: Created slice kubepods-burstable-pod991ab742_1070_4287_bca7_0fce1631e07b.slice - libcontainer container kubepods-burstable-pod991ab742_1070_4287_bca7_0fce1631e07b.slice. 
Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604027 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-bpf-maps\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604091 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/991ab742-1070-4287-bca7-0fce1631e07b-clustermesh-secrets\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604133 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cni-path\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604168 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-xtables-lock\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604196 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-run\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.604641 kubelet[2834]: I0916 04:53:01.604243 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-cgroup\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604313 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2kf9\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-kube-api-access-p2kf9\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604405 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-lib-modules\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604441 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-net\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604473 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-kernel\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604540 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-hostproc\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.605099 kubelet[2834]: I0916 04:53:01.604572 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-hubble-tls\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.606681 kubelet[2834]: I0916 04:53:01.606641 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/991ab742-1070-4287-bca7-0fce1631e07b-cilium-config-path\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.607046 kubelet[2834]: I0916 04:53:01.606880 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-etc-cni-netd\") pod \"cilium-24tpc\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " pod="kube-system/cilium-24tpc" Sep 16 04:53:01.614130 kubelet[2834]: E0916 04:53:01.614070 2834 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 04:53:01.614130 kubelet[2834]: E0916 04:53:01.614109 2834 projected.go:194] Error preparing data for projected volume kube-api-access-svx8n for pod kube-system/kube-proxy-tqgn9: configmap "kube-root-ca.crt" not found Sep 16 04:53:01.614470 kubelet[2834]: E0916 04:53:01.614199 2834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c85505fa-ea65-4ff5-96fe-d6802531b0fe-kube-api-access-svx8n podName:c85505fa-ea65-4ff5-96fe-d6802531b0fe nodeName:}" failed. No retries permitted until 2025-09-16 04:53:02.114171026 +0000 UTC m=+6.416822794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-svx8n" (UniqueName: "kubernetes.io/projected/c85505fa-ea65-4ff5-96fe-d6802531b0fe-kube-api-access-svx8n") pod "kube-proxy-tqgn9" (UID: "c85505fa-ea65-4ff5-96fe-d6802531b0fe") : configmap "kube-root-ca.crt" not found Sep 16 04:53:01.866391 containerd[1551]: time="2025-09-16T04:53:01.866337323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24tpc,Uid:991ab742-1070-4287-bca7-0fce1631e07b,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:01.867091 systemd[1]: Created slice kubepods-besteffort-pod8be2c115_1037_4329_a72f_fb2f750de3a3.slice - libcontainer container kubepods-besteffort-pod8be2c115_1037_4329_a72f_fb2f750de3a3.slice. 
Sep 16 04:53:01.910492 kubelet[2834]: I0916 04:53:01.908831 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2f6\" (UniqueName: \"kubernetes.io/projected/8be2c115-1037-4329-a72f-fb2f750de3a3-kube-api-access-pv2f6\") pod \"cilium-operator-5d85765b45-kbmdb\" (UID: \"8be2c115-1037-4329-a72f-fb2f750de3a3\") " pod="kube-system/cilium-operator-5d85765b45-kbmdb" Sep 16 04:53:01.918852 kubelet[2834]: I0916 04:53:01.914703 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8be2c115-1037-4329-a72f-fb2f750de3a3-cilium-config-path\") pod \"cilium-operator-5d85765b45-kbmdb\" (UID: \"8be2c115-1037-4329-a72f-fb2f750de3a3\") " pod="kube-system/cilium-operator-5d85765b45-kbmdb" Sep 16 04:53:01.934256 containerd[1551]: time="2025-09-16T04:53:01.934180575Z" level=info msg="connecting to shim 9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:01.977854 systemd[1]: Started cri-containerd-9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c.scope - libcontainer container 9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c. Sep 16 04:53:02.027219 containerd[1551]: time="2025-09-16T04:53:02.027165252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24tpc,Uid:991ab742-1070-4287-bca7-0fce1631e07b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\"" Sep 16 04:53:02.031918 containerd[1551]: time="2025-09-16T04:53:02.031200489Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 04:53:02.121731 containerd[1551]: time="2025-09-16T04:53:02.121502508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqgn9,Uid:c85505fa-ea65-4ff5-96fe-d6802531b0fe,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:02.146381 containerd[1551]: time="2025-09-16T04:53:02.146316798Z" level=info msg="connecting to shim 82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34" address="unix:///run/containerd/s/8d9a69eb6ff3503b59b730024acd9470d003a88dc186eaea4112c5c5c038c184" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:02.179369 containerd[1551]: time="2025-09-16T04:53:02.179251443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kbmdb,Uid:8be2c115-1037-4329-a72f-fb2f750de3a3,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:02.181849 systemd[1]: Started cri-containerd-82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34.scope - libcontainer container 82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34. 
Sep 16 04:53:02.218406 containerd[1551]: time="2025-09-16T04:53:02.218340011Z" level=info msg="connecting to shim 7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290" address="unix:///run/containerd/s/b8ce46d484ca81a59d8b5c12e694a406e4cda29ce0805c651987e884fc6cb4f1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:02.229971 containerd[1551]: time="2025-09-16T04:53:02.229926363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqgn9,Uid:c85505fa-ea65-4ff5-96fe-d6802531b0fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34\"" Sep 16 04:53:02.238641 containerd[1551]: time="2025-09-16T04:53:02.237762042Z" level=info msg="CreateContainer within sandbox \"82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:53:02.251773 containerd[1551]: time="2025-09-16T04:53:02.251727256Z" level=info msg="Container 79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:02.265933 systemd[1]: Started cri-containerd-7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290.scope - libcontainer container 7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290. Sep 16 04:53:02.269911 containerd[1551]: time="2025-09-16T04:53:02.269865465Z" level=info msg="CreateContainer within sandbox \"82baa0a55e22c709b5d22ebac8118204d03329703a8c25eb1c446f044a6d6b34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681\"" Sep 16 04:53:02.271689 containerd[1551]: time="2025-09-16T04:53:02.271592679Z" level=info msg="StartContainer for \"79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681\"" Sep 16 04:53:02.281511 containerd[1551]: time="2025-09-16T04:53:02.281405592Z" level=info msg="connecting to shim 79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681" address="unix:///run/containerd/s/8d9a69eb6ff3503b59b730024acd9470d003a88dc186eaea4112c5c5c038c184" protocol=ttrpc version=3 Sep 16 04:53:02.320051 systemd[1]: Started cri-containerd-79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681.scope - libcontainer container 79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681. 
Sep 16 04:53:02.372401 containerd[1551]: time="2025-09-16T04:53:02.372135546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kbmdb,Uid:8be2c115-1037-4329-a72f-fb2f750de3a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\"" Sep 16 04:53:02.408154 containerd[1551]: time="2025-09-16T04:53:02.408070542Z" level=info msg="StartContainer for \"79ff7659a49c7c135dd6b31c827f3beac3717fe18c02b44d68391af40fb85681\" returns successfully" Sep 16 04:53:02.981310 kubelet[2834]: I0916 04:53:02.981241 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqgn9" podStartSLOduration=1.9812190090000001 podStartE2EDuration="1.981219009s" podCreationTimestamp="2025-09-16 04:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:02.981133346 +0000 UTC m=+7.283785119" watchObservedRunningTime="2025-09-16 04:53:02.981219009 +0000 UTC m=+7.283870783" Sep 16 04:53:09.290683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5210155.mount: Deactivated successfully. Sep 16 04:53:12.194917 containerd[1551]: time="2025-09-16T04:53:12.194838376Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:12.196337 containerd[1551]: time="2025-09-16T04:53:12.196142334Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 16 04:53:12.197689 containerd[1551]: time="2025-09-16T04:53:12.197648548Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:12.199784 containerd[1551]: time="2025-09-16T04:53:12.199741491Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.168486034s" Sep 16 04:53:12.200057 containerd[1551]: time="2025-09-16T04:53:12.199931475Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 16 04:53:12.203086 containerd[1551]: time="2025-09-16T04:53:12.202863485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 04:53:12.205562 containerd[1551]: time="2025-09-16T04:53:12.205424717Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:53:12.215671 containerd[1551]: time="2025-09-16T04:53:12.215182412Z" level=info msg="Container dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:12.228026 containerd[1551]: 
time="2025-09-16T04:53:12.227970922Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\"" Sep 16 04:53:12.230641 containerd[1551]: time="2025-09-16T04:53:12.230546711Z" level=info msg="StartContainer for \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\"" Sep 16 04:53:12.232335 containerd[1551]: time="2025-09-16T04:53:12.232296229Z" level=info msg="connecting to shim dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" protocol=ttrpc version=3 Sep 16 04:53:12.267891 systemd[1]: Started cri-containerd-dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223.scope - libcontainer container dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223. Sep 16 04:53:12.324270 containerd[1551]: time="2025-09-16T04:53:12.324067620Z" level=info msg="StartContainer for \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" returns successfully" Sep 16 04:53:12.345219 systemd[1]: cri-containerd-dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223.scope: Deactivated successfully. Sep 16 04:53:12.352111 containerd[1551]: time="2025-09-16T04:53:12.351937424Z" level=info msg="received exit event container_id:\"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" id:\"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" pid:3251 exited_at:{seconds:1757998392 nanos:351409500}" Sep 16 04:53:12.352293 containerd[1551]: time="2025-09-16T04:53:12.352185472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" id:\"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" pid:3251 exited_at:{seconds:1757998392 nanos:351409500}" Sep 16 04:53:12.389885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223-rootfs.mount: Deactivated successfully. Sep 16 04:53:15.022313 containerd[1551]: time="2025-09-16T04:53:15.022066354Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:53:15.043758 containerd[1551]: time="2025-09-16T04:53:15.043697474Z" level=info msg="Container e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:15.079247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460276940.mount: Deactivated successfully. 
Sep 16 04:53:15.085300 containerd[1551]: time="2025-09-16T04:53:15.085242128Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\"" Sep 16 04:53:15.086822 containerd[1551]: time="2025-09-16T04:53:15.086772547Z" level=info msg="StartContainer for \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\"" Sep 16 04:53:15.089576 containerd[1551]: time="2025-09-16T04:53:15.089455735Z" level=info msg="connecting to shim e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" protocol=ttrpc version=3 Sep 16 04:53:15.121956 systemd[1]: Started cri-containerd-e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e.scope - libcontainer container e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e. Sep 16 04:53:15.170448 containerd[1551]: time="2025-09-16T04:53:15.170296399Z" level=info msg="StartContainer for \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" returns successfully" Sep 16 04:53:15.193028 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:53:15.194437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:15.195144 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:53:15.199057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:53:15.203169 systemd[1]: cri-containerd-e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e.scope: Deactivated successfully. Sep 16 04:53:15.209218 containerd[1551]: time="2025-09-16T04:53:15.209166492Z" level=info msg="received exit event container_id:\"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" id:\"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" pid:3302 exited_at:{seconds:1757998395 nanos:206426771}" Sep 16 04:53:15.209881 containerd[1551]: time="2025-09-16T04:53:15.209814780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" id:\"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" pid:3302 exited_at:{seconds:1757998395 nanos:206426771}" Sep 16 04:53:15.243454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:16.034580 containerd[1551]: time="2025-09-16T04:53:16.034508611Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:53:16.043265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e-rootfs.mount: Deactivated successfully. Sep 16 04:53:16.080516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603711212.mount: Deactivated successfully. 
Sep 16 04:53:16.083846 containerd[1551]: time="2025-09-16T04:53:16.083802207Z" level=info msg="Container 2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:16.104652 containerd[1551]: time="2025-09-16T04:53:16.104091688Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\"" Sep 16 04:53:16.107729 containerd[1551]: time="2025-09-16T04:53:16.107680617Z" level=info msg="StartContainer for \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\"" Sep 16 04:53:16.112558 containerd[1551]: time="2025-09-16T04:53:16.112478719Z" level=info msg="connecting to shim 2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" protocol=ttrpc version=3 Sep 16 04:53:16.166916 systemd[1]: Started cri-containerd-2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09.scope - libcontainer container 2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09. Sep 16 04:53:16.269399 systemd[1]: cri-containerd-2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09.scope: Deactivated successfully. Sep 16 04:53:16.276182 containerd[1551]: time="2025-09-16T04:53:16.274799117Z" level=info msg="StartContainer for \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" returns successfully" Sep 16 04:53:16.276182 containerd[1551]: time="2025-09-16T04:53:16.275411421Z" level=info msg="received exit event container_id:\"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" id:\"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" pid:3360 exited_at:{seconds:1757998396 nanos:274768487}" Sep 16 04:53:16.278349 containerd[1551]: time="2025-09-16T04:53:16.278313137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" id:\"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" pid:3360 exited_at:{seconds:1757998396 nanos:274768487}" Sep 16 04:53:16.345262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09-rootfs.mount: Deactivated successfully. 
Sep 16 04:53:16.565720 containerd[1551]: time="2025-09-16T04:53:16.565655268Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:16.566880 containerd[1551]: time="2025-09-16T04:53:16.566655119Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 16 04:53:16.567982 containerd[1551]: time="2025-09-16T04:53:16.567940394Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:16.569764 containerd[1551]: time="2025-09-16T04:53:16.569724984Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.366812515s" Sep 16 04:53:16.569922 containerd[1551]: time="2025-09-16T04:53:16.569895175Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 16 04:53:16.573176 containerd[1551]: time="2025-09-16T04:53:16.573102660Z" level=info msg="CreateContainer within sandbox \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 04:53:16.583045 containerd[1551]: time="2025-09-16T04:53:16.582988497Z" level=info msg="Container ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:16.596003 containerd[1551]: time="2025-09-16T04:53:16.595505116Z" level=info msg="CreateContainer within sandbox \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\"" Sep 16 04:53:16.597382 containerd[1551]: time="2025-09-16T04:53:16.597328815Z" level=info msg="StartContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\"" Sep 16 04:53:16.599345 containerd[1551]: time="2025-09-16T04:53:16.599300332Z" level=info msg="connecting to shim ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0" address="unix:///run/containerd/s/b8ce46d484ca81a59d8b5c12e694a406e4cda29ce0805c651987e884fc6cb4f1" protocol=ttrpc version=3 Sep 16 04:53:16.623900 systemd[1]: Started cri-containerd-ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0.scope - libcontainer container ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0. 
Sep 16 04:53:16.672044 containerd[1551]: time="2025-09-16T04:53:16.671878205Z" level=info msg="StartContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" returns successfully" Sep 16 04:53:17.045105 containerd[1551]: time="2025-09-16T04:53:17.044919741Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:53:17.073219 containerd[1551]: time="2025-09-16T04:53:17.073171539Z" level=info msg="Container 1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:17.099904 containerd[1551]: time="2025-09-16T04:53:17.099829593Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\"" Sep 16 04:53:17.102138 containerd[1551]: time="2025-09-16T04:53:17.102086824Z" level=info msg="StartContainer for \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\"" Sep 16 04:53:17.103592 containerd[1551]: time="2025-09-16T04:53:17.103550841Z" level=info msg="connecting to shim 1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" protocol=ttrpc version=3 Sep 16 04:53:17.165308 systemd[1]: Started cri-containerd-1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493.scope - libcontainer container 1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493. Sep 16 04:53:17.176150 kubelet[2834]: I0916 04:53:17.175970 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kbmdb" podStartSLOduration=1.980779129 podStartE2EDuration="16.175920537s" podCreationTimestamp="2025-09-16 04:53:01 +0000 UTC" firstStartedPulling="2025-09-16 04:53:02.375868633 +0000 UTC m=+6.678520400" lastFinishedPulling="2025-09-16 04:53:16.571010049 +0000 UTC m=+20.873661808" observedRunningTime="2025-09-16 04:53:17.175070685 +0000 UTC m=+21.477722460" watchObservedRunningTime="2025-09-16 04:53:17.175920537 +0000 UTC m=+21.478572314" Sep 16 04:53:17.239775 containerd[1551]: time="2025-09-16T04:53:17.239721520Z" level=info msg="StartContainer for \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" returns successfully" Sep 16 04:53:17.241005 systemd[1]: cri-containerd-1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493.scope: Deactivated successfully. 
Sep 16 04:53:17.243546 containerd[1551]: time="2025-09-16T04:53:17.243461152Z" level=info msg="received exit event container_id:\"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" id:\"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" pid:3436 exited_at:{seconds:1757998397 nanos:242944161}" Sep 16 04:53:17.244169 containerd[1551]: time="2025-09-16T04:53:17.243714057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" id:\"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" pid:3436 exited_at:{seconds:1757998397 nanos:242944161}" Sep 16 04:53:17.310733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493-rootfs.mount: Deactivated successfully. Sep 16 04:53:18.058713 containerd[1551]: time="2025-09-16T04:53:18.057486659Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:53:18.081842 containerd[1551]: time="2025-09-16T04:53:18.081727598Z" level=info msg="Container 8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:18.096740 containerd[1551]: time="2025-09-16T04:53:18.096680713Z" level=info msg="CreateContainer within sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\"" Sep 16 04:53:18.099238 containerd[1551]: time="2025-09-16T04:53:18.099172967Z" level=info msg="StartContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\"" Sep 16 04:53:18.100918 containerd[1551]: time="2025-09-16T04:53:18.100867247Z" level=info msg="connecting to shim 8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61" address="unix:///run/containerd/s/1875e8bce9f3daf4c7fd4414621fdbe845e31c68f71832ce34c9ce2226cb8576" protocol=ttrpc version=3 Sep 16 04:53:18.133863 systemd[1]: Started cri-containerd-8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61.scope - libcontainer container 8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61. Sep 16 04:53:18.206952 containerd[1551]: time="2025-09-16T04:53:18.206831615Z" level=info msg="StartContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" returns successfully" Sep 16 04:53:18.348718 containerd[1551]: time="2025-09-16T04:53:18.348664825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" id:\"aadf4e25c236644cd6f733babacaaf0948887e010488a1938614fd4a66be30c9\" pid:3503 exited_at:{seconds:1757998398 nanos:347125478}" Sep 16 04:53:18.396745 kubelet[2834]: I0916 04:53:18.394829 2834 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:53:18.463499 systemd[1]: Created slice kubepods-burstable-pod519b6030_94a6_408f_b20c_688c91780cd4.slice - libcontainer container kubepods-burstable-pod519b6030_94a6_408f_b20c_688c91780cd4.slice. Sep 16 04:53:18.479630 systemd[1]: Created slice kubepods-burstable-podf5ef5755_8008_48fa_aae9_d4fde63fe8b0.slice - libcontainer container kubepods-burstable-podf5ef5755_8008_48fa_aae9_d4fde63fe8b0.slice. 
Sep 16 04:53:18.539286 kubelet[2834]: I0916 04:53:18.539208 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cs7\" (UniqueName: \"kubernetes.io/projected/519b6030-94a6-408f-b20c-688c91780cd4-kube-api-access-t4cs7\") pod \"coredns-7c65d6cfc9-w2424\" (UID: \"519b6030-94a6-408f-b20c-688c91780cd4\") " pod="kube-system/coredns-7c65d6cfc9-w2424" Sep 16 04:53:18.539708 kubelet[2834]: I0916 04:53:18.539538 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr77p\" (UniqueName: \"kubernetes.io/projected/f5ef5755-8008-48fa-aae9-d4fde63fe8b0-kube-api-access-tr77p\") pod \"coredns-7c65d6cfc9-pq2vm\" (UID: \"f5ef5755-8008-48fa-aae9-d4fde63fe8b0\") " pod="kube-system/coredns-7c65d6cfc9-pq2vm" Sep 16 04:53:18.539708 kubelet[2834]: I0916 04:53:18.539630 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5ef5755-8008-48fa-aae9-d4fde63fe8b0-config-volume\") pod \"coredns-7c65d6cfc9-pq2vm\" (UID: \"f5ef5755-8008-48fa-aae9-d4fde63fe8b0\") " pod="kube-system/coredns-7c65d6cfc9-pq2vm" Sep 16 04:53:18.539708 kubelet[2834]: I0916 04:53:18.539665 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/519b6030-94a6-408f-b20c-688c91780cd4-config-volume\") pod \"coredns-7c65d6cfc9-w2424\" (UID: \"519b6030-94a6-408f-b20c-688c91780cd4\") " pod="kube-system/coredns-7c65d6cfc9-w2424" Sep 16 04:53:18.771575 containerd[1551]: time="2025-09-16T04:53:18.770778370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2424,Uid:519b6030-94a6-408f-b20c-688c91780cd4,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:18.795663 containerd[1551]: time="2025-09-16T04:53:18.795207381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq2vm,Uid:f5ef5755-8008-48fa-aae9-d4fde63fe8b0,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:20.810996 systemd-networkd[1440]: cilium_host: Link UP Sep 16 04:53:20.812826 systemd-networkd[1440]: cilium_net: Link UP Sep 16 04:53:20.814752 systemd-networkd[1440]: cilium_net: Gained carrier Sep 16 04:53:20.816201 systemd-networkd[1440]: cilium_host: Gained carrier Sep 16 04:53:20.945139 systemd-networkd[1440]: cilium_net: Gained IPv6LL Sep 16 04:53:20.967816 systemd-networkd[1440]: cilium_vxlan: Link UP Sep 16 04:53:20.967838 systemd-networkd[1440]: cilium_vxlan: Gained carrier Sep 16 04:53:20.992902 systemd-networkd[1440]: cilium_host: Gained IPv6LL Sep 16 04:53:21.248785 kernel: NET: Registered PF_ALG protocol family Sep 16 04:53:22.139770 systemd-networkd[1440]: lxc_health: Link UP Sep 16 04:53:22.147143 systemd-networkd[1440]: lxc_health: Gained carrier Sep 16 04:53:22.568926 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL Sep 16 04:53:22.874752 kernel: eth0: renamed from tmp9a8f4 Sep 16 04:53:22.880886 systemd-networkd[1440]: lxcd6bfd05a717e: Link UP Sep 16 04:53:22.886135 systemd-networkd[1440]: lxcd6bfd05a717e: Gained carrier Sep 16 04:53:22.887449 systemd-networkd[1440]: lxc0ebbdd94ccc6: Link UP Sep 16 04:53:22.901171 kernel: eth0: renamed from tmp3592f Sep 16 04:53:22.917157 systemd-networkd[1440]: lxc0ebbdd94ccc6: Gained carrier Sep 16 04:53:23.785772 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 16 04:53:23.921175 kubelet[2834]: I0916 04:53:23.921091 2834 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/cilium-24tpc" podStartSLOduration=12.750099708 podStartE2EDuration="22.920940439s" podCreationTimestamp="2025-09-16 04:53:01 +0000 UTC" firstStartedPulling="2025-09-16 04:53:02.030784625 +0000 UTC m=+6.333436375" lastFinishedPulling="2025-09-16 04:53:12.201625334 +0000 UTC m=+16.504277106" observedRunningTime="2025-09-16 04:53:19.146272935 +0000 UTC m=+23.448924709" watchObservedRunningTime="2025-09-16 04:53:23.920940439 +0000 UTC m=+28.223592213" Sep 16 04:53:23.976825 systemd-networkd[1440]: lxcd6bfd05a717e: Gained IPv6LL Sep 16 04:53:24.872834 systemd-networkd[1440]: lxc0ebbdd94ccc6: Gained IPv6LL Sep 16 04:53:27.091258 ntpd[1640]: Listen normally on 6 cilium_host 192.168.0.229:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 6 cilium_host 192.168.0.229:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 7 cilium_net [fe80::2c06:abff:fe94:6272%4]:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 8 cilium_host [fe80::dca6:b6ff:fe4a:bccd%5]:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 9 cilium_vxlan [fe80::c0f4:dfff:fe67:ef72%6]:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 10 lxc_health [fe80::d839:9aff:fea4:5087%8]:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 11 lxc0ebbdd94ccc6 [fe80::7c69:5eff:fe53:9c71%10]:123 Sep 16 04:53:27.092694 ntpd[1640]: 16 Sep 04:53:27 ntpd[1640]: Listen normally on 12 lxcd6bfd05a717e [fe80::7046:36ff:fe5f:1d28%12]:123 Sep 16 04:53:27.091945 ntpd[1640]: Listen normally on 7 cilium_net [fe80::2c06:abff:fe94:6272%4]:123 Sep 16 04:53:27.092087 ntpd[1640]: Listen normally on 8 cilium_host [fe80::dca6:b6ff:fe4a:bccd%5]:123 Sep 16 04:53:27.092241 ntpd[1640]: Listen normally on 9 cilium_vxlan [fe80::c0f4:dfff:fe67:ef72%6]:123 Sep 16 04:53:27.092328 ntpd[1640]: Listen normally on 10 lxc_health [fe80::d839:9aff:fea4:5087%8]:123 Sep 16 04:53:27.092377 ntpd[1640]: Listen normally on 11 lxc0ebbdd94ccc6 [fe80::7c69:5eff:fe53:9c71%10]:123 Sep 16 04:53:27.092419 ntpd[1640]: Listen normally on 12 lxcd6bfd05a717e [fe80::7046:36ff:fe5f:1d28%12]:123 Sep 16 04:53:27.801290 containerd[1551]: time="2025-09-16T04:53:27.801223189Z" level=info msg="connecting to shim 9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516" address="unix:///run/containerd/s/58e13d07cb2a6da72bd336fcdfe6b8511829044bfd924d7d2e3090ded67383c9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:27.806876 containerd[1551]: time="2025-09-16T04:53:27.806814129Z" level=info msg="connecting to shim 3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249" address="unix:///run/containerd/s/e97f579a7c2fa568be75c67211773870215c737252e54bedacdc81e3285c54cb" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:27.900040 systemd[1]: Started cri-containerd-9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516.scope - libcontainer container 9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516. Sep 16 04:53:27.912597 systemd[1]: Started cri-containerd-3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249.scope - libcontainer container 3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249. 
Sep 16 04:53:28.050695 containerd[1551]: time="2025-09-16T04:53:28.050566040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2424,Uid:519b6030-94a6-408f-b20c-688c91780cd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516\"" Sep 16 04:53:28.057812 containerd[1551]: time="2025-09-16T04:53:28.056251134Z" level=info msg="CreateContainer within sandbox \"9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:53:28.070956 containerd[1551]: time="2025-09-16T04:53:28.070898785Z" level=info msg="Container 282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:28.076206 containerd[1551]: time="2025-09-16T04:53:28.076154067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq2vm,Uid:f5ef5755-8008-48fa-aae9-d4fde63fe8b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249\"" Sep 16 04:53:28.085923 containerd[1551]: time="2025-09-16T04:53:28.085861633Z" level=info msg="CreateContainer within sandbox \"3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:53:28.088303 containerd[1551]: time="2025-09-16T04:53:28.088251081Z" level=info msg="CreateContainer within sandbox \"9a8f4547cd897048b5512e7cf2778ea9973de8c095ee5b3e0e88b6712ab8d516\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f\"" Sep 16 04:53:28.093727 containerd[1551]: time="2025-09-16T04:53:28.093637443Z" level=info msg="StartContainer for \"282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f\"" Sep 16 04:53:28.098630 containerd[1551]: time="2025-09-16T04:53:28.098453202Z" level=info msg="connecting to shim 282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f" address="unix:///run/containerd/s/58e13d07cb2a6da72bd336fcdfe6b8511829044bfd924d7d2e3090ded67383c9" protocol=ttrpc version=3 Sep 16 04:53:28.116139 containerd[1551]: time="2025-09-16T04:53:28.116092563Z" level=info msg="Container eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:28.126038 containerd[1551]: time="2025-09-16T04:53:28.125988151Z" level=info msg="CreateContainer within sandbox \"3592fefd4f3c5f5ae97795f479a0932fd93c09ef855b8b254c97b3bb71bef249\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66\"" Sep 16 04:53:28.128356 containerd[1551]: time="2025-09-16T04:53:28.127848523Z" level=info msg="StartContainer for \"eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66\"" Sep 16 04:53:28.134129 containerd[1551]: time="2025-09-16T04:53:28.133441414Z" level=info msg="connecting to shim eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66" address="unix:///run/containerd/s/e97f579a7c2fa568be75c67211773870215c737252e54bedacdc81e3285c54cb" protocol=ttrpc version=3 Sep 16 04:53:28.135878 systemd[1]: Started cri-containerd-282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f.scope - libcontainer container 282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f. 
Sep 16 04:53:28.168899 systemd[1]: Started cri-containerd-eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66.scope - libcontainer container eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66. Sep 16 04:53:28.230210 containerd[1551]: time="2025-09-16T04:53:28.230131040Z" level=info msg="StartContainer for \"282fec6075b94077486fc7fb00fea1c73733919aa27efcb676f0d6ee81deeb3f\" returns successfully" Sep 16 04:53:28.235963 containerd[1551]: time="2025-09-16T04:53:28.235849302Z" level=info msg="StartContainer for \"eafc09f0f1e240478cb54f872fdfe7ad2e0bba738aece725cfef0d452f6fbc66\" returns successfully" Sep 16 04:53:29.135055 kubelet[2834]: I0916 04:53:29.134386 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pq2vm" podStartSLOduration=28.134362671 podStartE2EDuration="28.134362671s" podCreationTimestamp="2025-09-16 04:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:29.133184887 +0000 UTC m=+33.435836663" watchObservedRunningTime="2025-09-16 04:53:29.134362671 +0000 UTC m=+33.437014447" Sep 16 04:53:29.185372 kubelet[2834]: I0916 04:53:29.184983 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w2424" podStartSLOduration=28.184952326 podStartE2EDuration="28.184952326s" podCreationTimestamp="2025-09-16 04:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:29.184371821 +0000 UTC m=+33.487023595" watchObservedRunningTime="2025-09-16 04:53:29.184952326 +0000 UTC m=+33.487604102" Sep 16 04:54:09.349518 systemd[1]: Started sshd@10-10.128.0.59:22-139.178.68.195:50084.service - OpenSSH per-connection server daemon (139.178.68.195:50084). Sep 16 04:54:09.655990 sshd[4147]: Accepted publickey for core from 139.178.68.195 port 50084 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:09.657713 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:09.665064 systemd-logind[1532]: New session 10 of user core. Sep 16 04:54:09.671817 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:54:10.002857 sshd[4150]: Connection closed by 139.178.68.195 port 50084 Sep 16 04:54:10.004173 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:10.010550 systemd[1]: sshd@10-10.128.0.59:22-139.178.68.195:50084.service: Deactivated successfully. Sep 16 04:54:10.014054 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:54:10.017033 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:54:10.019309 systemd-logind[1532]: Removed session 10. Sep 16 04:54:15.060526 systemd[1]: Started sshd@11-10.128.0.59:22-139.178.68.195:40070.service - OpenSSH per-connection server daemon (139.178.68.195:40070). Sep 16 04:54:15.370637 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 40070 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:15.372535 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:15.379987 systemd-logind[1532]: New session 11 of user core. Sep 16 04:54:15.389878 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 16 04:54:15.666420 sshd[4183]: Connection closed by 139.178.68.195 port 40070 Sep 16 04:54:15.667798 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:15.674182 systemd[1]: sshd@11-10.128.0.59:22-139.178.68.195:40070.service: Deactivated successfully. Sep 16 04:54:15.677661 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:54:15.679312 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:54:15.681535 systemd-logind[1532]: Removed session 11. Sep 16 04:54:20.720903 systemd[1]: Started sshd@12-10.128.0.59:22-139.178.68.195:54800.service - OpenSSH per-connection server daemon (139.178.68.195:54800). Sep 16 04:54:21.028835 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 54800 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:21.030357 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:21.036701 systemd-logind[1532]: New session 12 of user core. Sep 16 04:54:21.044926 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:54:21.320969 sshd[4199]: Connection closed by 139.178.68.195 port 54800 Sep 16 04:54:21.322236 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:21.327576 systemd[1]: sshd@12-10.128.0.59:22-139.178.68.195:54800.service: Deactivated successfully. Sep 16 04:54:21.330980 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:54:21.333306 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:54:21.336469 systemd-logind[1532]: Removed session 12. Sep 16 04:54:26.380728 systemd[1]: Started sshd@13-10.128.0.59:22-139.178.68.195:54804.service - OpenSSH per-connection server daemon (139.178.68.195:54804). Sep 16 04:54:26.689623 sshd[4213]: Accepted publickey for core from 139.178.68.195 port 54804 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:26.691439 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:26.698755 systemd-logind[1532]: New session 13 of user core. Sep 16 04:54:26.710888 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:54:26.980225 sshd[4216]: Connection closed by 139.178.68.195 port 54804 Sep 16 04:54:26.981470 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:26.987346 systemd[1]: sshd@13-10.128.0.59:22-139.178.68.195:54804.service: Deactivated successfully. Sep 16 04:54:26.991005 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:54:26.992690 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:54:26.995160 systemd-logind[1532]: Removed session 13. Sep 16 04:54:27.038466 systemd[1]: Started sshd@14-10.128.0.59:22-139.178.68.195:54810.service - OpenSSH per-connection server daemon (139.178.68.195:54810). Sep 16 04:54:27.358210 sshd[4229]: Accepted publickey for core from 139.178.68.195 port 54810 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:27.360024 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:27.367698 systemd-logind[1532]: New session 14 of user core. Sep 16 04:54:27.374312 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 16 04:54:27.729360 sshd[4232]: Connection closed by 139.178.68.195 port 54810 Sep 16 04:54:27.729818 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:27.744061 systemd[1]: sshd@14-10.128.0.59:22-139.178.68.195:54810.service: Deactivated successfully. Sep 16 04:54:27.750001 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:54:27.753014 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:54:27.756477 systemd-logind[1532]: Removed session 14. Sep 16 04:54:27.781838 systemd[1]: Started sshd@15-10.128.0.59:22-139.178.68.195:54826.service - OpenSSH per-connection server daemon (139.178.68.195:54826). Sep 16 04:54:28.095152 sshd[4242]: Accepted publickey for core from 139.178.68.195 port 54826 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:28.096912 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:28.103764 systemd-logind[1532]: New session 15 of user core. Sep 16 04:54:28.115846 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:54:28.395524 sshd[4245]: Connection closed by 139.178.68.195 port 54826 Sep 16 04:54:28.396908 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:28.402984 systemd[1]: sshd@15-10.128.0.59:22-139.178.68.195:54826.service: Deactivated successfully. Sep 16 04:54:28.408542 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:54:28.410392 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:54:28.413433 systemd-logind[1532]: Removed session 15. Sep 16 04:54:33.448686 systemd[1]: Started sshd@16-10.128.0.59:22-139.178.68.195:60898.service - OpenSSH per-connection server daemon (139.178.68.195:60898). Sep 16 04:54:33.749740 sshd[4260]: Accepted publickey for core from 139.178.68.195 port 60898 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:33.751488 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:33.758749 systemd-logind[1532]: New session 16 of user core. Sep 16 04:54:33.765864 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:54:34.046350 sshd[4263]: Connection closed by 139.178.68.195 port 60898 Sep 16 04:54:34.047937 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:34.054703 systemd[1]: sshd@16-10.128.0.59:22-139.178.68.195:60898.service: Deactivated successfully. Sep 16 04:54:34.058558 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:54:34.060760 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:54:34.063128 systemd-logind[1532]: Removed session 16. Sep 16 04:54:39.103020 systemd[1]: Started sshd@17-10.128.0.59:22-139.178.68.195:60914.service - OpenSSH per-connection server daemon (139.178.68.195:60914). Sep 16 04:54:39.414024 sshd[4277]: Accepted publickey for core from 139.178.68.195 port 60914 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:39.415661 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:39.423182 systemd-logind[1532]: New session 17 of user core. Sep 16 04:54:39.430828 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 16 04:54:39.705920 sshd[4280]: Connection closed by 139.178.68.195 port 60914 Sep 16 04:54:39.706821 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:39.713135 systemd[1]: sshd@17-10.128.0.59:22-139.178.68.195:60914.service: Deactivated successfully. Sep 16 04:54:39.716549 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:54:39.717948 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:54:39.721282 systemd-logind[1532]: Removed session 17. Sep 16 04:54:39.760592 systemd[1]: Started sshd@18-10.128.0.59:22-139.178.68.195:60926.service - OpenSSH per-connection server daemon (139.178.68.195:60926). Sep 16 04:54:40.058199 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 60926 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:40.060015 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:40.067630 systemd-logind[1532]: New session 18 of user core. Sep 16 04:54:40.072920 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:54:40.414884 sshd[4295]: Connection closed by 139.178.68.195 port 60926 Sep 16 04:54:40.415954 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:40.422406 systemd[1]: sshd@18-10.128.0.59:22-139.178.68.195:60926.service: Deactivated successfully. Sep 16 04:54:40.425773 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:54:40.427415 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:54:40.430043 systemd-logind[1532]: Removed session 18. Sep 16 04:54:40.473742 systemd[1]: Started sshd@19-10.128.0.59:22-139.178.68.195:59960.service - OpenSSH per-connection server daemon (139.178.68.195:59960). Sep 16 04:54:40.774278 sshd[4304]: Accepted publickey for core from 139.178.68.195 port 59960 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:40.776077 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:40.783004 systemd-logind[1532]: New session 19 of user core. Sep 16 04:54:40.795847 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:54:42.529768 sshd[4307]: Connection closed by 139.178.68.195 port 59960 Sep 16 04:54:42.530935 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:42.540249 systemd[1]: sshd@19-10.128.0.59:22-139.178.68.195:59960.service: Deactivated successfully. Sep 16 04:54:42.545499 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:54:42.546475 systemd[1]: session-19.scope: Consumed 637ms CPU time, 69.2M memory peak. Sep 16 04:54:42.550156 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:54:42.552600 systemd-logind[1532]: Removed session 19. Sep 16 04:54:42.588200 systemd[1]: Started sshd@20-10.128.0.59:22-139.178.68.195:59968.service - OpenSSH per-connection server daemon (139.178.68.195:59968). Sep 16 04:54:42.909601 sshd[4324]: Accepted publickey for core from 139.178.68.195 port 59968 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:42.911547 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:42.917717 systemd-logind[1532]: New session 20 of user core. Sep 16 04:54:42.921820 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 16 04:54:43.349802 sshd[4327]: Connection closed by 139.178.68.195 port 59968 Sep 16 04:54:43.350924 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:43.357313 systemd[1]: sshd@20-10.128.0.59:22-139.178.68.195:59968.service: Deactivated successfully. Sep 16 04:54:43.360764 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:54:43.362699 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:54:43.365227 systemd-logind[1532]: Removed session 20. Sep 16 04:54:43.403824 systemd[1]: Started sshd@21-10.128.0.59:22-139.178.68.195:59976.service - OpenSSH per-connection server daemon (139.178.68.195:59976). Sep 16 04:54:43.710184 sshd[4337]: Accepted publickey for core from 139.178.68.195 port 59976 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:43.711581 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:43.718677 systemd-logind[1532]: New session 21 of user core. Sep 16 04:54:43.723873 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:54:44.002192 sshd[4340]: Connection closed by 139.178.68.195 port 59976 Sep 16 04:54:44.002972 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:44.009507 systemd[1]: sshd@21-10.128.0.59:22-139.178.68.195:59976.service: Deactivated successfully. Sep 16 04:54:44.013366 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:54:44.015491 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:54:44.017802 systemd-logind[1532]: Removed session 21. Sep 16 04:54:49.060870 systemd[1]: Started sshd@22-10.128.0.59:22-139.178.68.195:59984.service - OpenSSH per-connection server daemon (139.178.68.195:59984). Sep 16 04:54:49.368277 sshd[4352]: Accepted publickey for core from 139.178.68.195 port 59984 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:49.371134 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:49.381154 systemd-logind[1532]: New session 22 of user core. Sep 16 04:54:49.390812 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:54:49.657526 sshd[4356]: Connection closed by 139.178.68.195 port 59984 Sep 16 04:54:49.658270 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:49.664041 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:54:49.665315 systemd[1]: sshd@22-10.128.0.59:22-139.178.68.195:59984.service: Deactivated successfully. Sep 16 04:54:49.668081 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:54:49.671881 systemd-logind[1532]: Removed session 22. Sep 16 04:54:54.715884 systemd[1]: Started sshd@23-10.128.0.59:22-139.178.68.195:51866.service - OpenSSH per-connection server daemon (139.178.68.195:51866). Sep 16 04:54:55.019578 sshd[4371]: Accepted publickey for core from 139.178.68.195 port 51866 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:54:55.021353 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:54:55.028280 systemd-logind[1532]: New session 23 of user core. Sep 16 04:54:55.032816 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 16 04:54:55.313406 sshd[4374]: Connection closed by 139.178.68.195 port 51866 Sep 16 04:54:55.314818 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:55.322145 systemd[1]: sshd@23-10.128.0.59:22-139.178.68.195:51866.service: Deactivated successfully. Sep 16 04:54:55.325771 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:54:55.327468 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:54:55.330098 systemd-logind[1532]: Removed session 23. Sep 16 04:55:00.370040 systemd[1]: Started sshd@24-10.128.0.59:22-139.178.68.195:56562.service - OpenSSH per-connection server daemon (139.178.68.195:56562). Sep 16 04:55:00.672762 sshd[4388]: Accepted publickey for core from 139.178.68.195 port 56562 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:55:00.673795 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:00.681214 systemd-logind[1532]: New session 24 of user core. Sep 16 04:55:00.690863 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:55:00.956481 sshd[4391]: Connection closed by 139.178.68.195 port 56562 Sep 16 04:55:00.957913 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:00.963644 systemd[1]: sshd@24-10.128.0.59:22-139.178.68.195:56562.service: Deactivated successfully. Sep 16 04:55:00.967509 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:55:00.969914 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:55:00.971908 systemd-logind[1532]: Removed session 24. Sep 16 04:55:01.015441 systemd[1]: Started sshd@25-10.128.0.59:22-139.178.68.195:56578.service - OpenSSH per-connection server daemon (139.178.68.195:56578). Sep 16 04:55:01.340687 sshd[4403]: Accepted publickey for core from 139.178.68.195 port 56578 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:55:01.342085 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:01.350973 systemd-logind[1532]: New session 25 of user core. Sep 16 04:55:01.356066 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 04:55:03.528807 containerd[1551]: time="2025-09-16T04:55:03.528735115Z" level=info msg="StopContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" with timeout 30 (s)" Sep 16 04:55:03.531953 containerd[1551]: time="2025-09-16T04:55:03.531903850Z" level=info msg="Stop container \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" with signal terminated" Sep 16 04:55:03.552381 systemd[1]: cri-containerd-ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0.scope: Deactivated successfully. 
Sep 16 04:55:03.556884 containerd[1551]: time="2025-09-16T04:55:03.556737610Z" level=info msg="received exit event container_id:\"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" id:\"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" pid:3402 exited_at:{seconds:1757998503 nanos:556195516}" Sep 16 04:55:03.557902 containerd[1551]: time="2025-09-16T04:55:03.557466447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" id:\"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" pid:3402 exited_at:{seconds:1757998503 nanos:556195516}" Sep 16 04:55:03.583447 containerd[1551]: time="2025-09-16T04:55:03.583356291Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:55:03.596145 containerd[1551]: time="2025-09-16T04:55:03.595990724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" id:\"44ee4362cd8c1434f637ab02796ae2f7f2ffe0e5c397867b9da4368c282f8d38\" pid:4434 exited_at:{seconds:1757998503 nanos:594194193}" Sep 16 04:55:03.602572 containerd[1551]: time="2025-09-16T04:55:03.602474962Z" level=info msg="StopContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" with timeout 2 (s)" Sep 16 04:55:03.605808 containerd[1551]: time="2025-09-16T04:55:03.605688579Z" level=info msg="Stop container \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" with signal terminated" Sep 16 04:55:03.611685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0-rootfs.mount: Deactivated successfully. Sep 16 04:55:03.624508 systemd-networkd[1440]: lxc_health: Link DOWN Sep 16 04:55:03.626420 systemd-networkd[1440]: lxc_health: Lost carrier Sep 16 04:55:03.646478 systemd[1]: cri-containerd-8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61.scope: Deactivated successfully. Sep 16 04:55:03.647417 systemd[1]: cri-containerd-8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61.scope: Consumed 9.258s CPU time, 127.4M memory peak, 128K read from disk, 13.3M written to disk. 
Sep 16 04:55:03.650115 containerd[1551]: time="2025-09-16T04:55:03.647416949Z" level=info msg="received exit event container_id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" pid:3474 exited_at:{seconds:1757998503 nanos:646789502}" Sep 16 04:55:03.650115 containerd[1551]: time="2025-09-16T04:55:03.648187246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" id:\"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" pid:3474 exited_at:{seconds:1757998503 nanos:646789502}" Sep 16 04:55:03.657688 containerd[1551]: time="2025-09-16T04:55:03.657640337Z" level=info msg="StopContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" returns successfully" Sep 16 04:55:03.661008 containerd[1551]: time="2025-09-16T04:55:03.660957513Z" level=info msg="StopPodSandbox for \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\"" Sep 16 04:55:03.661698 containerd[1551]: time="2025-09-16T04:55:03.661659362Z" level=info msg="Container to stop \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.682601 systemd[1]: cri-containerd-7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290.scope: Deactivated successfully. Sep 16 04:55:03.686532 containerd[1551]: time="2025-09-16T04:55:03.686345168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" id:\"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" pid:3037 exit_status:137 exited_at:{seconds:1757998503 nanos:686044971}" Sep 16 04:55:03.711176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61-rootfs.mount: Deactivated successfully. 
Sep 16 04:55:03.725099 containerd[1551]: time="2025-09-16T04:55:03.725039101Z" level=info msg="StopContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" returns successfully" Sep 16 04:55:03.726994 containerd[1551]: time="2025-09-16T04:55:03.726930571Z" level=info msg="StopPodSandbox for \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\"" Sep 16 04:55:03.727148 containerd[1551]: time="2025-09-16T04:55:03.727036009Z" level=info msg="Container to stop \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.727148 containerd[1551]: time="2025-09-16T04:55:03.727057926Z" level=info msg="Container to stop \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.727148 containerd[1551]: time="2025-09-16T04:55:03.727074540Z" level=info msg="Container to stop \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.727148 containerd[1551]: time="2025-09-16T04:55:03.727092164Z" level=info msg="Container to stop \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.727148 containerd[1551]: time="2025-09-16T04:55:03.727108346Z" level=info msg="Container to stop \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:55:03.741347 systemd[1]: cri-containerd-9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c.scope: Deactivated successfully. Sep 16 04:55:03.758123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290-rootfs.mount: Deactivated successfully. Sep 16 04:55:03.763637 containerd[1551]: time="2025-09-16T04:55:03.763215085Z" level=info msg="shim disconnected" id=7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290 namespace=k8s.io Sep 16 04:55:03.763637 containerd[1551]: time="2025-09-16T04:55:03.763363043Z" level=warning msg="cleaning up after shim disconnected" id=7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290 namespace=k8s.io Sep 16 04:55:03.763637 containerd[1551]: time="2025-09-16T04:55:03.763381125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:55:03.763969 containerd[1551]: time="2025-09-16T04:55:03.763798526Z" level=info msg="received exit event sandbox_id:\"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" exit_status:137 exited_at:{seconds:1757998503 nanos:686044971}" Sep 16 04:55:03.768962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290-shm.mount: Deactivated successfully. 
Sep 16 04:55:03.770506 containerd[1551]: time="2025-09-16T04:55:03.770462040Z" level=info msg="TearDown network for sandbox \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" successfully" Sep 16 04:55:03.770647 containerd[1551]: time="2025-09-16T04:55:03.770520710Z" level=info msg="StopPodSandbox for \"7ef65167f0932e3fcb255ab33b182def8fb0fc34a8a980d6a88db9d9a0e90290\" returns successfully" Sep 16 04:55:03.818576 containerd[1551]: time="2025-09-16T04:55:03.817087576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" id:\"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" pid:2941 exit_status:137 exited_at:{seconds:1757998503 nanos:745284951}" Sep 16 04:55:03.825763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c-rootfs.mount: Deactivated successfully. Sep 16 04:55:03.829017 containerd[1551]: time="2025-09-16T04:55:03.828722376Z" level=info msg="shim disconnected" id=9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c namespace=k8s.io Sep 16 04:55:03.829017 containerd[1551]: time="2025-09-16T04:55:03.828763751Z" level=warning msg="cleaning up after shim disconnected" id=9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c namespace=k8s.io Sep 16 04:55:03.829017 containerd[1551]: time="2025-09-16T04:55:03.828776947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:55:03.830737 containerd[1551]: time="2025-09-16T04:55:03.830684793Z" level=info msg="received exit event sandbox_id:\"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" exit_status:137 exited_at:{seconds:1757998503 nanos:745284951}" Sep 16 04:55:03.834997 containerd[1551]: time="2025-09-16T04:55:03.834958591Z" level=info msg="TearDown network for sandbox \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" successfully" Sep 16 04:55:03.834997 containerd[1551]: time="2025-09-16T04:55:03.834993292Z" level=info msg="StopPodSandbox for \"9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c\" returns successfully" Sep 16 04:55:03.840008 kubelet[2834]: I0916 04:55:03.838970 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2f6\" (UniqueName: \"kubernetes.io/projected/8be2c115-1037-4329-a72f-fb2f750de3a3-kube-api-access-pv2f6\") pod \"8be2c115-1037-4329-a72f-fb2f750de3a3\" (UID: \"8be2c115-1037-4329-a72f-fb2f750de3a3\") " Sep 16 04:55:03.840008 kubelet[2834]: I0916 04:55:03.839055 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8be2c115-1037-4329-a72f-fb2f750de3a3-cilium-config-path\") pod \"8be2c115-1037-4329-a72f-fb2f750de3a3\" (UID: \"8be2c115-1037-4329-a72f-fb2f750de3a3\") " Sep 16 04:55:03.847946 kubelet[2834]: I0916 04:55:03.846590 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be2c115-1037-4329-a72f-fb2f750de3a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8be2c115-1037-4329-a72f-fb2f750de3a3" (UID: "8be2c115-1037-4329-a72f-fb2f750de3a3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:55:03.850157 kubelet[2834]: I0916 04:55:03.850116 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be2c115-1037-4329-a72f-fb2f750de3a3-kube-api-access-pv2f6" (OuterVolumeSpecName: "kube-api-access-pv2f6") pod "8be2c115-1037-4329-a72f-fb2f750de3a3" (UID: "8be2c115-1037-4329-a72f-fb2f750de3a3"). InnerVolumeSpecName "kube-api-access-pv2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:55:03.924497 systemd[1]: Removed slice kubepods-besteffort-pod8be2c115_1037_4329_a72f_fb2f750de3a3.slice - libcontainer container kubepods-besteffort-pod8be2c115_1037_4329_a72f_fb2f750de3a3.slice. Sep 16 04:55:03.939417 kubelet[2834]: I0916 04:55:03.939350 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-xtables-lock\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939417 kubelet[2834]: I0916 04:55:03.939418 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2kf9\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-kube-api-access-p2kf9\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939446 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-cgroup\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939473 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-hubble-tls\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939504 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/991ab742-1070-4287-bca7-0fce1631e07b-cilium-config-path\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939528 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-bpf-maps\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939553 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-etc-cni-netd\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.939698 kubelet[2834]: I0916 04:55:03.939582 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/991ab742-1070-4287-bca7-0fce1631e07b-clustermesh-secrets\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 
kubelet[2834]: I0916 04:55:03.939637 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cni-path\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 kubelet[2834]: I0916 04:55:03.939670 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-lib-modules\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 kubelet[2834]: I0916 04:55:03.939696 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-run\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 kubelet[2834]: I0916 04:55:03.939720 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-kernel\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 kubelet[2834]: I0916 04:55:03.939749 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-net\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940001 kubelet[2834]: I0916 04:55:03.939775 2834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-hostproc\") pod \"991ab742-1070-4287-bca7-0fce1631e07b\" (UID: \"991ab742-1070-4287-bca7-0fce1631e07b\") " Sep 16 04:55:03.940298 kubelet[2834]: I0916 04:55:03.939840 2834 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8be2c115-1037-4329-a72f-fb2f750de3a3-cilium-config-path\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:03.940298 kubelet[2834]: I0916 04:55:03.939861 2834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2f6\" (UniqueName: \"kubernetes.io/projected/8be2c115-1037-4329-a72f-fb2f750de3a3-kube-api-access-pv2f6\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:03.940298 kubelet[2834]: I0916 04:55:03.939922 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-hostproc" (OuterVolumeSpecName: "hostproc") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.940298 kubelet[2834]: I0916 04:55:03.939974 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.942904 kubelet[2834]: I0916 04:55:03.942858 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cni-path" (OuterVolumeSpecName: "cni-path") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.943151 kubelet[2834]: I0916 04:55:03.942913 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.943151 kubelet[2834]: I0916 04:55:03.942937 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.943151 kubelet[2834]: I0916 04:55:03.942959 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.943151 kubelet[2834]: I0916 04:55:03.942982 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.944392 kubelet[2834]: I0916 04:55:03.944312 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.944392 kubelet[2834]: I0916 04:55:03.944337 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.944392 kubelet[2834]: I0916 04:55:03.944361 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 16 04:55:03.948114 kubelet[2834]: I0916 04:55:03.948060 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/991ab742-1070-4287-bca7-0fce1631e07b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:55:03.948532 kubelet[2834]: I0916 04:55:03.948473 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991ab742-1070-4287-bca7-0fce1631e07b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 04:55:03.949349 kubelet[2834]: I0916 04:55:03.949301 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-kube-api-access-p2kf9" (OuterVolumeSpecName: "kube-api-access-p2kf9") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "kube-api-access-p2kf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:55:03.950480 kubelet[2834]: I0916 04:55:03.950448 2834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "991ab742-1070-4287-bca7-0fce1631e07b" (UID: "991ab742-1070-4287-bca7-0fce1631e07b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:55:04.041006 kubelet[2834]: I0916 04:55:04.040939 2834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2kf9\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-kube-api-access-p2kf9\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041006 kubelet[2834]: I0916 04:55:04.041003 2834 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/991ab742-1070-4287-bca7-0fce1631e07b-cilium-config-path\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041026 2834 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-cgroup\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041046 2834 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/991ab742-1070-4287-bca7-0fce1631e07b-hubble-tls\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041064 2834 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-bpf-maps\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041083 2834 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-etc-cni-netd\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041100 2834 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/991ab742-1070-4287-bca7-0fce1631e07b-clustermesh-secrets\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041116 2834 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cni-path\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041304 kubelet[2834]: I0916 04:55:04.041140 2834 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-lib-modules\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041543 kubelet[2834]: I0916 04:55:04.041156 2834 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-hostproc\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041543 kubelet[2834]: I0916 04:55:04.041175 2834 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-cilium-run\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041543 kubelet[2834]: I0916 
04:55:04.041193 2834 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-kernel\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041543 kubelet[2834]: I0916 04:55:04.041210 2834 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-host-proc-sys-net\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.041543 kubelet[2834]: I0916 04:55:04.041227 2834 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/991ab742-1070-4287-bca7-0fce1631e07b-xtables-lock\") on node \"ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7\" DevicePath \"\"" Sep 16 04:55:04.364825 kubelet[2834]: I0916 04:55:04.363529 2834 scope.go:117] "RemoveContainer" containerID="ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0" Sep 16 04:55:04.368011 containerd[1551]: time="2025-09-16T04:55:04.367862594Z" level=info msg="RemoveContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\"" Sep 16 04:55:04.380748 containerd[1551]: time="2025-09-16T04:55:04.380579542Z" level=info msg="RemoveContainer for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" returns successfully" Sep 16 04:55:04.381164 kubelet[2834]: I0916 04:55:04.381126 2834 scope.go:117] "RemoveContainer" containerID="ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0" Sep 16 04:55:04.381459 containerd[1551]: time="2025-09-16T04:55:04.381367210Z" level=error msg="ContainerStatus for \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\": not found" Sep 16 04:55:04.383136 kubelet[2834]: E0916 04:55:04.383006 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\": not found" containerID="ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0" Sep 16 04:55:04.383447 kubelet[2834]: I0916 04:55:04.383102 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0"} err="failed to get container status \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebebb78e8c0d5d6e7d07044ce685bdb43758291d79852d9acaf742b3987cbdb0\": not found" Sep 16 04:55:04.383738 kubelet[2834]: I0916 04:55:04.383589 2834 scope.go:117] "RemoveContainer" containerID="8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61" Sep 16 04:55:04.387813 containerd[1551]: time="2025-09-16T04:55:04.387765943Z" level=info msg="RemoveContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\"" Sep 16 04:55:04.395744 systemd[1]: Removed slice kubepods-burstable-pod991ab742_1070_4287_bca7_0fce1631e07b.slice - libcontainer container kubepods-burstable-pod991ab742_1070_4287_bca7_0fce1631e07b.slice. 
Sep 16 04:55:04.395932 systemd[1]: kubepods-burstable-pod991ab742_1070_4287_bca7_0fce1631e07b.slice: Consumed 9.409s CPU time, 127.8M memory peak, 128K read from disk, 13.3M written to disk. Sep 16 04:55:04.403062 containerd[1551]: time="2025-09-16T04:55:04.402885082Z" level=info msg="RemoveContainer for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" returns successfully" Sep 16 04:55:04.403625 kubelet[2834]: I0916 04:55:04.403301 2834 scope.go:117] "RemoveContainer" containerID="1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493" Sep 16 04:55:04.407233 containerd[1551]: time="2025-09-16T04:55:04.407178595Z" level=info msg="RemoveContainer for \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\"" Sep 16 04:55:04.415250 containerd[1551]: time="2025-09-16T04:55:04.415185730Z" level=info msg="RemoveContainer for \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" returns successfully" Sep 16 04:55:04.415583 kubelet[2834]: I0916 04:55:04.415556 2834 scope.go:117] "RemoveContainer" containerID="2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09" Sep 16 04:55:04.419210 containerd[1551]: time="2025-09-16T04:55:04.419168625Z" level=info msg="RemoveContainer for \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\"" Sep 16 04:55:04.428643 containerd[1551]: time="2025-09-16T04:55:04.427560547Z" level=info msg="RemoveContainer for \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" returns successfully" Sep 16 04:55:04.428806 kubelet[2834]: I0916 04:55:04.427923 2834 scope.go:117] "RemoveContainer" containerID="e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e" Sep 16 04:55:04.430355 containerd[1551]: time="2025-09-16T04:55:04.430306586Z" level=info msg="RemoveContainer for \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\"" Sep 16 04:55:04.434425 containerd[1551]: time="2025-09-16T04:55:04.434364417Z" level=info msg="RemoveContainer for \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" returns successfully" Sep 16 04:55:04.434772 kubelet[2834]: I0916 04:55:04.434693 2834 scope.go:117] "RemoveContainer" containerID="dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223" Sep 16 04:55:04.436623 containerd[1551]: time="2025-09-16T04:55:04.436564705Z" level=info msg="RemoveContainer for \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\"" Sep 16 04:55:04.440410 containerd[1551]: time="2025-09-16T04:55:04.440353230Z" level=info msg="RemoveContainer for \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" returns successfully" Sep 16 04:55:04.440734 kubelet[2834]: I0916 04:55:04.440560 2834 scope.go:117] "RemoveContainer" containerID="8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61" Sep 16 04:55:04.440930 containerd[1551]: time="2025-09-16T04:55:04.440862286Z" level=error msg="ContainerStatus for \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\": not found" Sep 16 04:55:04.441143 kubelet[2834]: E0916 04:55:04.441104 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\": not found" 
containerID="8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61" Sep 16 04:55:04.441266 kubelet[2834]: I0916 04:55:04.441147 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61"} err="failed to get container status \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a484a9ddfe8fbb7cdf06fab48321d50437eb813282eb0fb10566eaf23ff2f61\": not found" Sep 16 04:55:04.441266 kubelet[2834]: I0916 04:55:04.441178 2834 scope.go:117] "RemoveContainer" containerID="1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493" Sep 16 04:55:04.441457 containerd[1551]: time="2025-09-16T04:55:04.441388991Z" level=error msg="ContainerStatus for \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\": not found" Sep 16 04:55:04.441587 kubelet[2834]: E0916 04:55:04.441543 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\": not found" containerID="1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493" Sep 16 04:55:04.441692 kubelet[2834]: I0916 04:55:04.441582 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493"} err="failed to get container status \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e8c9a9b996a24a63ca3df5b149b5a74a0578beb1298def36ddde692d5dc2493\": not found" Sep 16 04:55:04.441692 kubelet[2834]: I0916 04:55:04.441640 2834 scope.go:117] "RemoveContainer" containerID="2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09" Sep 16 04:55:04.442011 containerd[1551]: time="2025-09-16T04:55:04.441933331Z" level=error msg="ContainerStatus for \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\": not found" Sep 16 04:55:04.442217 kubelet[2834]: E0916 04:55:04.442169 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\": not found" containerID="2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09" Sep 16 04:55:04.442217 kubelet[2834]: I0916 04:55:04.442206 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09"} err="failed to get container status \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b89d426b39b1f3d7ac855233ff7f4e287cce79d73f8294c46c86975ec865f09\": not found" Sep 16 04:55:04.442399 kubelet[2834]: I0916 04:55:04.442232 2834 scope.go:117] "RemoveContainer" 
containerID="e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e" Sep 16 04:55:04.442501 containerd[1551]: time="2025-09-16T04:55:04.442467174Z" level=error msg="ContainerStatus for \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\": not found" Sep 16 04:55:04.442820 kubelet[2834]: E0916 04:55:04.442787 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\": not found" containerID="e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e" Sep 16 04:55:04.442961 kubelet[2834]: I0916 04:55:04.442831 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e"} err="failed to get container status \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e69575f05ce5d7e1f9838e892ba2577383fed0f448772fe01539c7f91ce4ee3e\": not found" Sep 16 04:55:04.442961 kubelet[2834]: I0916 04:55:04.442855 2834 scope.go:117] "RemoveContainer" containerID="dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223" Sep 16 04:55:04.443149 containerd[1551]: time="2025-09-16T04:55:04.443060346Z" level=error msg="ContainerStatus for \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\": not found" Sep 16 04:55:04.443333 kubelet[2834]: E0916 04:55:04.443197 2834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\": not found" containerID="dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223" Sep 16 04:55:04.443333 kubelet[2834]: I0916 04:55:04.443226 2834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223"} err="failed to get container status \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\": rpc error: code = NotFound desc = an error occurred when try to find container \"dde51aefcbee04dd336b8961fbf2c8ddbf187466148223814eddbf9616e2c223\": not found" Sep 16 04:55:04.607322 systemd[1]: var-lib-kubelet-pods-8be2c115\x2d1037\x2d4329\x2da72f\x2dfb2f750de3a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpv2f6.mount: Deactivated successfully. Sep 16 04:55:04.607494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c345259ec8263f03f7270b7f15983df7989eed220b4b197f4ddbf3e0824ac5c-shm.mount: Deactivated successfully. Sep 16 04:55:04.607784 systemd[1]: var-lib-kubelet-pods-991ab742\x2d1070\x2d4287\x2dbca7\x2d0fce1631e07b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp2kf9.mount: Deactivated successfully. Sep 16 04:55:04.607940 systemd[1]: var-lib-kubelet-pods-991ab742\x2d1070\x2d4287\x2dbca7\x2d0fce1631e07b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 16 04:55:04.608052 systemd[1]: var-lib-kubelet-pods-991ab742\x2d1070\x2d4287\x2dbca7\x2d0fce1631e07b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 04:55:05.509691 sshd[4406]: Connection closed by 139.178.68.195 port 56578 Sep 16 04:55:05.510761 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:05.517175 systemd[1]: sshd@25-10.128.0.59:22-139.178.68.195:56578.service: Deactivated successfully. Sep 16 04:55:05.520878 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 04:55:05.521357 systemd[1]: session-25.scope: Consumed 1.415s CPU time, 23.9M memory peak. Sep 16 04:55:05.522953 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Sep 16 04:55:05.525681 systemd-logind[1532]: Removed session 25. Sep 16 04:55:05.562919 systemd[1]: Started sshd@26-10.128.0.59:22-139.178.68.195:56588.service - OpenSSH per-connection server daemon (139.178.68.195:56588). Sep 16 04:55:05.866720 sshd[4556]: Accepted publickey for core from 139.178.68.195 port 56588 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:55:05.868546 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:05.876590 systemd-logind[1532]: New session 26 of user core. Sep 16 04:55:05.886836 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 16 04:55:05.915055 kubelet[2834]: I0916 04:55:05.915005 2834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be2c115-1037-4329-a72f-fb2f750de3a3" path="/var/lib/kubelet/pods/8be2c115-1037-4329-a72f-fb2f750de3a3/volumes" Sep 16 04:55:05.915808 kubelet[2834]: I0916 04:55:05.915772 2834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991ab742-1070-4287-bca7-0fce1631e07b" path="/var/lib/kubelet/pods/991ab742-1070-4287-bca7-0fce1631e07b/volumes" Sep 16 04:55:06.058927 kubelet[2834]: E0916 04:55:06.058863 2834 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 04:55:06.090638 ntpd[1640]: Deleting 10 lxc_health, [fe80::d839:9aff:fea4:5087%8]:123, stats: received=0, sent=0, dropped=0, active_time=99 secs Sep 16 04:55:06.091379 ntpd[1640]: 16 Sep 04:55:06 ntpd[1640]: Deleting 10 lxc_health, [fe80::d839:9aff:fea4:5087%8]:123, stats: received=0, sent=0, dropped=0, active_time=99 secs Sep 16 04:55:06.591017 kubelet[2834]: E0916 04:55:06.589964 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="cilium-agent" Sep 16 04:55:06.591017 kubelet[2834]: E0916 04:55:06.590012 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="mount-cgroup" Sep 16 04:55:06.591017 kubelet[2834]: E0916 04:55:06.590025 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="mount-bpf-fs" Sep 16 04:55:06.591017 kubelet[2834]: E0916 04:55:06.590036 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8be2c115-1037-4329-a72f-fb2f750de3a3" containerName="cilium-operator" Sep 16 04:55:06.591017 kubelet[2834]: E0916 04:55:06.590049 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="apply-sysctl-overwrites" Sep 16 04:55:06.591017 kubelet[2834]: E0916 
04:55:06.590061 2834 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="clean-cilium-state" Sep 16 04:55:06.591017 kubelet[2834]: I0916 04:55:06.590097 2834 memory_manager.go:354] "RemoveStaleState removing state" podUID="991ab742-1070-4287-bca7-0fce1631e07b" containerName="cilium-agent" Sep 16 04:55:06.591017 kubelet[2834]: I0916 04:55:06.590109 2834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be2c115-1037-4329-a72f-fb2f750de3a3" containerName="cilium-operator" Sep 16 04:55:06.604644 sshd[4559]: Connection closed by 139.178.68.195 port 56588 Sep 16 04:55:06.605906 sshd-session[4556]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:06.607758 systemd[1]: Created slice kubepods-burstable-poda988ebb7_68d2_40da_a8b1_1829045159da.slice - libcontainer container kubepods-burstable-poda988ebb7_68d2_40da_a8b1_1829045159da.slice. Sep 16 04:55:06.627155 systemd[1]: sshd@26-10.128.0.59:22-139.178.68.195:56588.service: Deactivated successfully. Sep 16 04:55:06.632975 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 04:55:06.637879 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit. Sep 16 04:55:06.643844 systemd-logind[1532]: Removed session 26. Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658756 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a988ebb7-68d2-40da-a8b1-1829045159da-cilium-config-path\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658833 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpbg\" (UniqueName: \"kubernetes.io/projected/a988ebb7-68d2-40da-a8b1-1829045159da-kube-api-access-fzpbg\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658873 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-etc-cni-netd\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658905 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a988ebb7-68d2-40da-a8b1-1829045159da-cilium-ipsec-secrets\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658935 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-bpf-maps\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659236 kubelet[2834]: I0916 04:55:06.658962 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-cni-path\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 
16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.658987 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-xtables-lock\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.659018 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-cilium-run\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.659044 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a988ebb7-68d2-40da-a8b1-1829045159da-clustermesh-secrets\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.659103 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-hostproc\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.659169 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-host-proc-sys-kernel\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.659703 kubelet[2834]: I0916 04:55:06.659229 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a988ebb7-68d2-40da-a8b1-1829045159da-hubble-tls\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.660000 kubelet[2834]: I0916 04:55:06.659255 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-cilium-cgroup\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.660000 kubelet[2834]: I0916 04:55:06.659307 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-lib-modules\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.660000 kubelet[2834]: I0916 04:55:06.659334 2834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a988ebb7-68d2-40da-a8b1-1829045159da-host-proc-sys-net\") pod \"cilium-v8n67\" (UID: \"a988ebb7-68d2-40da-a8b1-1829045159da\") " pod="kube-system/cilium-v8n67" Sep 16 04:55:06.673016 systemd[1]: Started sshd@27-10.128.0.59:22-139.178.68.195:56600.service - OpenSSH per-connection server daemon (139.178.68.195:56600). 
Sep 16 04:55:06.920885 containerd[1551]: time="2025-09-16T04:55:06.920700711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8n67,Uid:a988ebb7-68d2-40da-a8b1-1829045159da,Namespace:kube-system,Attempt:0,}" Sep 16 04:55:06.948479 containerd[1551]: time="2025-09-16T04:55:06.948396395Z" level=info msg="connecting to shim 3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:06.991844 systemd[1]: Started cri-containerd-3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed.scope - libcontainer container 3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed. Sep 16 04:55:07.007410 sshd[4570]: Accepted publickey for core from 139.178.68.195 port 56600 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:55:07.009719 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:07.020156 systemd-logind[1532]: New session 27 of user core. Sep 16 04:55:07.026077 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 16 04:55:07.049054 containerd[1551]: time="2025-09-16T04:55:07.048954822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8n67,Uid:a988ebb7-68d2-40da-a8b1-1829045159da,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\"" Sep 16 04:55:07.053867 containerd[1551]: time="2025-09-16T04:55:07.053813577Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 04:55:07.063637 containerd[1551]: time="2025-09-16T04:55:07.063540758Z" level=info msg="Container 94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:07.080968 containerd[1551]: time="2025-09-16T04:55:07.080903510Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\"" Sep 16 04:55:07.082218 containerd[1551]: time="2025-09-16T04:55:07.082170657Z" level=info msg="StartContainer for \"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\"" Sep 16 04:55:07.083693 containerd[1551]: time="2025-09-16T04:55:07.083647872Z" level=info msg="connecting to shim 94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" protocol=ttrpc version=3 Sep 16 04:55:07.109855 systemd[1]: Started cri-containerd-94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706.scope - libcontainer container 94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706. Sep 16 04:55:07.153735 containerd[1551]: time="2025-09-16T04:55:07.153627904Z" level=info msg="StartContainer for \"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\" returns successfully" Sep 16 04:55:07.164869 systemd[1]: cri-containerd-94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706.scope: Deactivated successfully. 
Sep 16 04:55:07.169640 containerd[1551]: time="2025-09-16T04:55:07.169078697Z" level=info msg="received exit event container_id:\"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\" id:\"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\" pid:4636 exited_at:{seconds:1757998507 nanos:168311322}" Sep 16 04:55:07.169819 containerd[1551]: time="2025-09-16T04:55:07.169461137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\" id:\"94498ddda3410c52dbbb2dcb73c93d997dd305e4f6e712772890103ad24b3706\" pid:4636 exited_at:{seconds:1757998507 nanos:168311322}" Sep 16 04:55:07.218782 sshd[4615]: Connection closed by 139.178.68.195 port 56600 Sep 16 04:55:07.220089 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:07.226323 systemd[1]: sshd@27-10.128.0.59:22-139.178.68.195:56600.service: Deactivated successfully. Sep 16 04:55:07.229386 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 04:55:07.232907 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit. Sep 16 04:55:07.234981 systemd-logind[1532]: Removed session 27. Sep 16 04:55:07.274652 systemd[1]: Started sshd@28-10.128.0.59:22-139.178.68.195:56614.service - OpenSSH per-connection server daemon (139.178.68.195:56614). Sep 16 04:55:07.395667 containerd[1551]: time="2025-09-16T04:55:07.394157418Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 04:55:07.408470 containerd[1551]: time="2025-09-16T04:55:07.408420195Z" level=info msg="Container c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:07.417806 containerd[1551]: time="2025-09-16T04:55:07.417740493Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\"" Sep 16 04:55:07.418829 containerd[1551]: time="2025-09-16T04:55:07.418770735Z" level=info msg="StartContainer for \"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\"" Sep 16 04:55:07.420543 containerd[1551]: time="2025-09-16T04:55:07.420498553Z" level=info msg="connecting to shim c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" protocol=ttrpc version=3 Sep 16 04:55:07.450906 systemd[1]: Started cri-containerd-c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d.scope - libcontainer container c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d. Sep 16 04:55:07.499846 containerd[1551]: time="2025-09-16T04:55:07.499705423Z" level=info msg="StartContainer for \"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\" returns successfully" Sep 16 04:55:07.516349 systemd[1]: cri-containerd-c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d.scope: Deactivated successfully. 
Sep 16 04:55:07.519720 containerd[1551]: time="2025-09-16T04:55:07.519663904Z" level=info msg="received exit event container_id:\"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\" id:\"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\" pid:4689 exited_at:{seconds:1757998507 nanos:519057452}" Sep 16 04:55:07.520137 containerd[1551]: time="2025-09-16T04:55:07.520051879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\" id:\"c0ed501967d65e2e8717cf5253dcfc2d66854c04b08bdc8a5c64aef0b2fd2d6d\" pid:4689 exited_at:{seconds:1757998507 nanos:519057452}" Sep 16 04:55:07.599682 sshd[4674]: Accepted publickey for core from 139.178.68.195 port 56614 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 04:55:07.601134 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:07.608701 systemd-logind[1532]: New session 28 of user core. Sep 16 04:55:07.616823 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 16 04:55:08.400759 containerd[1551]: time="2025-09-16T04:55:08.400698840Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 04:55:08.426338 containerd[1551]: time="2025-09-16T04:55:08.421772272Z" level=info msg="Container 13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:08.433519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922034971.mount: Deactivated successfully. Sep 16 04:55:08.440358 containerd[1551]: time="2025-09-16T04:55:08.440286086Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\"" Sep 16 04:55:08.441651 containerd[1551]: time="2025-09-16T04:55:08.441181381Z" level=info msg="StartContainer for \"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\"" Sep 16 04:55:08.443751 containerd[1551]: time="2025-09-16T04:55:08.443676875Z" level=info msg="connecting to shim 13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" protocol=ttrpc version=3 Sep 16 04:55:08.479876 systemd[1]: Started cri-containerd-13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983.scope - libcontainer container 13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983. Sep 16 04:55:08.543303 systemd[1]: cri-containerd-13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983.scope: Deactivated successfully. 
Sep 16 04:55:08.545654 containerd[1551]: time="2025-09-16T04:55:08.544744725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\" id:\"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\" pid:4743 exited_at:{seconds:1757998508 nanos:544263388}" Sep 16 04:55:08.545654 containerd[1551]: time="2025-09-16T04:55:08.545430343Z" level=info msg="received exit event container_id:\"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\" id:\"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\" pid:4743 exited_at:{seconds:1757998508 nanos:544263388}" Sep 16 04:55:08.546411 containerd[1551]: time="2025-09-16T04:55:08.546353301Z" level=info msg="StartContainer for \"13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983\" returns successfully" Sep 16 04:55:08.582173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13972ff5ccb18a8302ef45930c0effff16a6660787c0ae1c497fe2769fde9983-rootfs.mount: Deactivated successfully. Sep 16 04:55:09.113451 kubelet[2834]: I0916 04:55:09.113361 2834 setters.go:600] "Node became not ready" node="ci-4459-0-0-nightly-20250915-2100-4297d38a767f187a7ad7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T04:55:09Z","lastTransitionTime":"2025-09-16T04:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 04:55:09.414292 containerd[1551]: time="2025-09-16T04:55:09.413940620Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 04:55:09.433182 containerd[1551]: time="2025-09-16T04:55:09.433131199Z" level=info msg="Container 74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:09.448559 containerd[1551]: time="2025-09-16T04:55:09.448485402Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\"" Sep 16 04:55:09.449724 containerd[1551]: time="2025-09-16T04:55:09.449663635Z" level=info msg="StartContainer for \"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\"" Sep 16 04:55:09.451480 containerd[1551]: time="2025-09-16T04:55:09.451429238Z" level=info msg="connecting to shim 74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" protocol=ttrpc version=3 Sep 16 04:55:09.500979 systemd[1]: Started cri-containerd-74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1.scope - libcontainer container 74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1. Sep 16 04:55:09.543375 systemd[1]: cri-containerd-74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1.scope: Deactivated successfully. 
Sep 16 04:55:09.546741 containerd[1551]: time="2025-09-16T04:55:09.546580232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\" id:\"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\" pid:4783 exited_at:{seconds:1757998509 nanos:543685391}" Sep 16 04:55:09.548117 containerd[1551]: time="2025-09-16T04:55:09.548067081Z" level=info msg="received exit event container_id:\"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\" id:\"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\" pid:4783 exited_at:{seconds:1757998509 nanos:543685391}" Sep 16 04:55:09.561086 containerd[1551]: time="2025-09-16T04:55:09.561039248Z" level=info msg="StartContainer for \"74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1\" returns successfully" Sep 16 04:55:09.587943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74e88986e7217baf79bc85dd5ea0e29fc4cf0dea6dbae420a5698c2b15c94ec1-rootfs.mount: Deactivated successfully. Sep 16 04:55:10.415466 containerd[1551]: time="2025-09-16T04:55:10.415332649Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:55:10.436743 containerd[1551]: time="2025-09-16T04:55:10.436676822Z" level=info msg="Container 839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:10.452026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710108793.mount: Deactivated successfully. Sep 16 04:55:10.458430 containerd[1551]: time="2025-09-16T04:55:10.458363275Z" level=info msg="CreateContainer within sandbox \"3f984f231988c26171385c3681f9cb3ba5b35cb7411c79b64ca9e189a4e4f9ed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\"" Sep 16 04:55:10.459422 containerd[1551]: time="2025-09-16T04:55:10.459370884Z" level=info msg="StartContainer for \"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\"" Sep 16 04:55:10.461318 containerd[1551]: time="2025-09-16T04:55:10.461275018Z" level=info msg="connecting to shim 839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf" address="unix:///run/containerd/s/9b9945c754e1fffa35d680defc7843d5f3fe6e8a6dccdff3c0a3fe9fe4dd9f98" protocol=ttrpc version=3 Sep 16 04:55:10.496844 systemd[1]: Started cri-containerd-839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf.scope - libcontainer container 839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf. 
Sep 16 04:55:10.549673 containerd[1551]: time="2025-09-16T04:55:10.549624288Z" level=info msg="StartContainer for \"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" returns successfully" Sep 16 04:55:10.662494 containerd[1551]: time="2025-09-16T04:55:10.662427221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"fd35fa4521e8c61bd6b3012ef8ad22cb8deeecc054091bca2fe9f282c189ca59\" pid:4849 exited_at:{seconds:1757998510 nanos:661948603}" Sep 16 04:55:11.091664 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 16 04:55:12.024517 containerd[1551]: time="2025-09-16T04:55:12.024461186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"fe73f06622a6bf30594e59ac39f8253d5f2e99fc7b947ace29875fde7949fdfe\" pid:4924 exit_status:1 exited_at:{seconds:1757998512 nanos:23908207}" Sep 16 04:55:14.289672 containerd[1551]: time="2025-09-16T04:55:14.289594868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"80e0e7837892e009ebb86e0add5bb745dbf69cdf60d6276f3c885a75fec1f158\" pid:5306 exit_status:1 exited_at:{seconds:1757998514 nanos:287946746}" Sep 16 04:55:14.294354 kubelet[2834]: E0916 04:55:14.294306 2834 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41678->127.0.0.1:34101: write tcp 127.0.0.1:41678->127.0.0.1:34101: write: broken pipe Sep 16 04:55:14.477377 systemd-networkd[1440]: lxc_health: Link UP Sep 16 04:55:14.480576 systemd-networkd[1440]: lxc_health: Gained carrier Sep 16 04:55:14.957524 kubelet[2834]: I0916 04:55:14.957101 2834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v8n67" podStartSLOduration=8.957072272 podStartE2EDuration="8.957072272s" podCreationTimestamp="2025-09-16 04:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:55:11.449241685 +0000 UTC m=+135.751893460" watchObservedRunningTime="2025-09-16 04:55:14.957072272 +0000 UTC m=+139.259724048" Sep 16 04:55:15.784821 systemd-networkd[1440]: lxc_health: Gained IPv6LL Sep 16 04:55:16.641054 containerd[1551]: time="2025-09-16T04:55:16.640989662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"d59f5dbb50b1c2d36928f12abcefa522fe26ed01546a1e44429cf20abb03021a\" pid:5402 exited_at:{seconds:1757998516 nanos:640016055}" Sep 16 04:55:18.091233 ntpd[1640]: Listen normally on 13 lxc_health [fe80::a825:aff:fe29:46e9%14]:123 Sep 16 04:55:18.091941 ntpd[1640]: 16 Sep 04:55:18 ntpd[1640]: Listen normally on 13 lxc_health [fe80::a825:aff:fe29:46e9%14]:123 Sep 16 04:55:18.910021 containerd[1551]: time="2025-09-16T04:55:18.909968647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"29aaaaa9b166960bbf5077d67a5ae61799de24aa6b3c9081e443caeee677bc7f\" pid:5435 exited_at:{seconds:1757998518 nanos:909159526}" Sep 16 04:55:21.069025 containerd[1551]: time="2025-09-16T04:55:21.068956373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" 
id:\"f14a23d9a715974756732c4e26abc7357338397fd3bb9dadf8ef3dd965829160\" pid:5464 exited_at:{seconds:1757998521 nanos:67587467}" Sep 16 04:55:23.275869 containerd[1551]: time="2025-09-16T04:55:23.275815276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839fa7b2d4fc6444440f9931cdf4e646e10ea5401a7ae438978d0899e2736ddf\" id:\"73ea9e8d2c00cd0c5cbaf661b7fdf90037244dd2fe1c6fff46068ec88e65e6a6\" pid:5487 exited_at:{seconds:1757998523 nanos:274594901}" Sep 16 04:55:23.323563 sshd[4723]: Connection closed by 139.178.68.195 port 56614 Sep 16 04:55:23.324567 sshd-session[4674]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:23.329890 systemd[1]: sshd@28-10.128.0.59:22-139.178.68.195:56614.service: Deactivated successfully. Sep 16 04:55:23.333637 systemd[1]: session-28.scope: Deactivated successfully. Sep 16 04:55:23.335734 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit. Sep 16 04:55:23.338591 systemd-logind[1532]: Removed session 28.