Sep 4 23:45:24.170933 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025
Sep 4 23:45:24.171132 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:45:24.171152 kernel: BIOS-provided physical RAM map:
Sep 4 23:45:24.171289 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 4 23:45:24.171303 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 4 23:45:24.171318 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 4 23:45:24.171334 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 4 23:45:24.171350 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 4 23:45:24.171370 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable
Sep 4 23:45:24.171385 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data
Sep 4 23:45:24.171401 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable
Sep 4 23:45:24.171416 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Sep 4 23:45:24.171431 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 4 23:45:24.171446 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 4 23:45:24.171470 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 4 23:45:24.171487 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 4 23:45:24.171504 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 4 23:45:24.171520 kernel: NX (Execute Disable) protection: active
Sep 4 23:45:24.171537 kernel: APIC: Static calls initialized
Sep 4 23:45:24.171553 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:45:24.171578 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018
Sep 4 23:45:24.171595 kernel: random: crng init done
Sep 4 23:45:24.171611 kernel: secureboot: Secure boot disabled
Sep 4 23:45:24.171627 kernel: SMBIOS 2.4 present.
Sep 4 23:45:24.171649 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 4 23:45:24.171665 kernel: Hypervisor detected: KVM
Sep 4 23:45:24.171682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 23:45:24.171698 kernel: kvm-clock: using sched offset of 13498629085 cycles
Sep 4 23:45:24.171717 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 23:45:24.171734 kernel: tsc: Detected 2299.998 MHz processor
Sep 4 23:45:24.171751 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 23:45:24.171768 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 23:45:24.171783 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 4 23:45:24.171801 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 4 23:45:24.171824 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 23:45:24.171841 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 4 23:45:24.171858 kernel: Using GB pages for direct mapping
Sep 4 23:45:24.171876 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:45:24.174168 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 4 23:45:24.174201 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 4 23:45:24.174232 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 4 23:45:24.174258 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 4 23:45:24.174287 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 4 23:45:24.174306 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 4 23:45:24.174325 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 4 23:45:24.174344 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 4 23:45:24.174368 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 4 23:45:24.174387 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 4 23:45:24.174411 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 4 23:45:24.174429 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 4 23:45:24.174448 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 4 23:45:24.174467 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 4 23:45:24.174485 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 4 23:45:24.174503 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 4 23:45:24.174521 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 4 23:45:24.174540 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 4 23:45:24.174564 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 4 23:45:24.174590 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 4 23:45:24.174608 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 23:45:24.174626 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 23:45:24.174644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 4 23:45:24.174662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 4 23:45:24.174680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 4 23:45:24.174699 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 4 23:45:24.174718 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 4 23:45:24.174742 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Sep 4 23:45:24.174762 kernel: Zone ranges:
Sep 4 23:45:24.174780 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 23:45:24.174800 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 4 23:45:24.174818 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 4 23:45:24.174837 kernel: Movable zone start for each node
Sep 4 23:45:24.174855 kernel: Early memory node ranges
Sep 4 23:45:24.174873 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 4 23:45:24.174909 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 4 23:45:24.174927 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff]
Sep 4 23:45:24.174950 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff]
Sep 4 23:45:24.174968 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 4 23:45:24.174986 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 4 23:45:24.175004 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 4 23:45:24.175022 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 23:45:24.175041 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 4 23:45:24.175059 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 4 23:45:24.175078 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Sep 4 23:45:24.175096 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 4 23:45:24.175121 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 4 23:45:24.175139 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 23:45:24.175158 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 23:45:24.175176 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 23:45:24.175194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 23:45:24.175213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 23:45:24.175232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 23:45:24.175250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 23:45:24.175269 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 23:45:24.175293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 23:45:24.175311 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 4 23:45:24.175329 kernel: Booting paravirtualized kernel on KVM
Sep 4 23:45:24.175347 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 23:45:24.175366 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 23:45:24.175384 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 4 23:45:24.175408 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 4 23:45:24.175427 kernel: pcpu-alloc: [0] 0 1
Sep 4 23:45:24.175444 kernel: kvm-guest: PV spinlocks enabled
Sep 4 23:45:24.175467 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 23:45:24.175489 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:45:24.175508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:45:24.175527 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 4 23:45:24.175547 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:45:24.175565 kernel: Fallback order for Node 0: 0
Sep 4 23:45:24.175590 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Sep 4 23:45:24.175609 kernel: Policy zone: Normal
Sep 4 23:45:24.175632 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:45:24.175650 kernel: software IO TLB: area num 2.
Sep 4 23:45:24.175670 kernel: Memory: 7511324K/7860552K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 348972K reserved, 0K cma-reserved)
Sep 4 23:45:24.175688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:45:24.175706 kernel: Kernel/User page tables isolation: enabled
Sep 4 23:45:24.175725 kernel: ftrace: allocating 37943 entries in 149 pages
Sep 4 23:45:24.175743 kernel: ftrace: allocated 149 pages with 4 groups
Sep 4 23:45:24.175762 kernel: Dynamic Preempt: voluntary
Sep 4 23:45:24.175803 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:45:24.175824 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:45:24.175844 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:45:24.175864 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:45:24.176731 kernel: Rude variant of Tasks RCU enabled.
Sep 4 23:45:24.176760 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:45:24.176776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:45:24.176793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:45:24.176810 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 23:45:24.177052 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:45:24.177222 kernel: Console: colour dummy device 80x25
Sep 4 23:45:24.177239 kernel: printk: console [ttyS0] enabled
Sep 4 23:45:24.177257 kernel: ACPI: Core revision 20230628
Sep 4 23:45:24.177274 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 23:45:24.177428 kernel: x2apic enabled
Sep 4 23:45:24.177448 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 23:45:24.177467 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 4 23:45:24.177486 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 4 23:45:24.177513 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 4 23:45:24.177532 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 4 23:45:24.177695 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 4 23:45:24.177716 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 23:45:24.177735 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 4 23:45:24.177755 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 4 23:45:24.177928 kernel: Spectre V2 : Mitigation: IBRS
Sep 4 23:45:24.177949 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 23:45:24.177969 kernel: RETBleed: Mitigation: IBRS
Sep 4 23:45:24.177995 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 23:45:24.178141 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 4 23:45:24.178162 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 23:45:24.178181 kernel: MDS: Mitigation: Clear CPU buffers
Sep 4 23:45:24.178201 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 23:45:24.178221 kernel: active return thunk: its_return_thunk
Sep 4 23:45:24.178240 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 4 23:45:24.178260 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 23:45:24.178280 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 23:45:24.178305 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 23:45:24.178324 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 23:45:24.178343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 4 23:45:24.178362 kernel: Freeing SMP alternatives memory: 32K
Sep 4 23:45:24.178383 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:45:24.178403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:45:24.178423 kernel: landlock: Up and running.
Sep 4 23:45:24.178442 kernel: SELinux: Initializing.
Sep 4 23:45:24.178462 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 23:45:24.178489 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 23:45:24.178509 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 4 23:45:24.178529 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:45:24.178549 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:45:24.178569 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:45:24.178597 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 4 23:45:24.178617 kernel: signal: max sigframe size: 1776
Sep 4 23:45:24.178636 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:45:24.178662 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:45:24.178682 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 23:45:24.178701 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:45:24.178721 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 23:45:24.178741 kernel: .... node #0, CPUs: #1
Sep 4 23:45:24.178762 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 23:45:24.178784 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 23:45:24.178804 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:45:24.178829 kernel: smpboot: Max logical packages: 1
Sep 4 23:45:24.178849 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 4 23:45:24.178869 kernel: devtmpfs: initialized
Sep 4 23:45:24.178901 kernel: x86/mm: Memory block size: 128MB
Sep 4 23:45:24.178933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 4 23:45:24.178954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:45:24.178974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:45:24.178994 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:45:24.179014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:45:24.179039 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:45:24.179058 kernel: audit: type=2000 audit(1757029522.586:1): state=initialized audit_enabled=0 res=1
Sep 4 23:45:24.179077 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:45:24.179097 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 23:45:24.179117 kernel: cpuidle: using governor menu
Sep 4 23:45:24.179137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:45:24.179157 kernel: dca service started, version 1.12.1
Sep 4 23:45:24.179177 kernel: PCI: Using configuration type 1 for base access
Sep 4 23:45:24.179196 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 23:45:24.179221 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:45:24.179241 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:45:24.179260 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:45:24.179280 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:45:24.179300 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:45:24.179320 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:45:24.179339 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:45:24.179358 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 23:45:24.179379 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 23:45:24.179403 kernel: ACPI: Interpreter enabled
Sep 4 23:45:24.179422 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 23:45:24.179442 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 23:45:24.179461 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 23:45:24.179481 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 4 23:45:24.179501 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 4 23:45:24.179521 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 23:45:24.179934 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:45:24.180181 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 23:45:24.180391 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 23:45:24.180416 kernel: PCI host bridge to bus 0000:00
Sep 4 23:45:24.180636 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 23:45:24.180828 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 23:45:24.182669 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 23:45:24.182873 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 4 23:45:24.183093 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 23:45:24.183314 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 23:45:24.183533 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 4 23:45:24.183743 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 23:45:24.187615 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 23:45:24.187869 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 4 23:45:24.188100 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 4 23:45:24.188291 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 4 23:45:24.188491 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 4 23:45:24.188690 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 4 23:45:24.188878 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 4 23:45:24.189099 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 23:45:24.189290 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 4 23:45:24.189487 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 4 23:45:24.189510 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 23:45:24.189530 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 23:45:24.189549 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 23:45:24.189568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 23:45:24.189594 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 23:45:24.189613 kernel: iommu: Default domain type: Translated
Sep 4 23:45:24.189632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 23:45:24.189651 kernel: efivars: Registered efivars operations
Sep 4 23:45:24.189675 kernel: PCI: Using ACPI for IRQ routing
Sep 4 23:45:24.189694 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 23:45:24.189713 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 4 23:45:24.189732 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 4 23:45:24.189747 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff]
Sep 4 23:45:24.189763 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 4 23:45:24.189777 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 4 23:45:24.189793 kernel: vgaarb: loaded
Sep 4 23:45:24.189809 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 23:45:24.189829 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:45:24.189847 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:45:24.189867 kernel: pnp: PnP ACPI init
Sep 4 23:45:24.191919 kernel: pnp: PnP ACPI: found 7 devices
Sep 4 23:45:24.191961 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 23:45:24.191980 kernel: NET: Registered PF_INET protocol family
Sep 4 23:45:24.191998 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 23:45:24.192017 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 4 23:45:24.192036 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:45:24.192062 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:45:24.192080 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 4 23:45:24.192099 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 4 23:45:24.192117 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 23:45:24.192136 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 4 23:45:24.192154 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:45:24.192173 kernel: NET: Registered PF_XDP protocol family
Sep 4 23:45:24.192420 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 23:45:24.192627 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 23:45:24.192822 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 23:45:24.193017 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 4 23:45:24.193231 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 23:45:24.193256 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:45:24.193274 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 4 23:45:24.193291 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 4 23:45:24.193317 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 23:45:24.193336 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 4 23:45:24.193354 kernel: clocksource: Switched to clocksource tsc
Sep 4 23:45:24.193371 kernel: Initialise system trusted keyrings
Sep 4 23:45:24.193390 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 4 23:45:24.193408 kernel: Key type asymmetric registered
Sep 4 23:45:24.193427 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:45:24.193447 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 23:45:24.193463 kernel: io scheduler mq-deadline registered
Sep 4 23:45:24.193486 kernel: io scheduler kyber registered
Sep 4 23:45:24.193502 kernel: io scheduler bfq registered
Sep 4 23:45:24.193527 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 23:45:24.193546 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 23:45:24.193767 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 4 23:45:24.193791 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 23:45:24.196104 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 4 23:45:24.196150 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 23:45:24.196358 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 4 23:45:24.196392 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:45:24.196413 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 23:45:24.196433 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 4 23:45:24.196453 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 4 23:45:24.196473 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 4 23:45:24.196691 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 4 23:45:24.196720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 23:45:24.196740 kernel: i8042: Warning: Keylock active
Sep 4 23:45:24.196765 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 23:45:24.196784 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 23:45:24.197026 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 23:45:24.197210 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 23:45:24.197408 kernel: rtc_cmos 00:00: setting system clock to 2025-09-04T23:45:23 UTC (1757029523)
Sep 4 23:45:24.197594 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 23:45:24.197619 kernel: intel_pstate: CPU model not supported
Sep 4 23:45:24.197640 kernel: pstore: Using crash dump compression: deflate
Sep 4 23:45:24.197667 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 23:45:24.197686 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:45:24.197706 kernel: Segment Routing with IPv6
Sep 4 23:45:24.197726 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:45:24.197746 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:45:24.197766 kernel: Key type dns_resolver registered
Sep 4 23:45:24.197785 kernel: IPI shorthand broadcast: enabled
Sep 4 23:45:24.197805 kernel: sched_clock: Marking stable (1046005928, 173781118)->(1253547024, -33759978)
Sep 4 23:45:24.197825 kernel: registered taskstats version 1
Sep 4 23:45:24.197849 kernel: Loading compiled-in X.509 certificates
Sep 4 23:45:24.197869 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b'
Sep 4 23:45:24.200014 kernel: Key type .fscrypt registered
Sep 4 23:45:24.200051 kernel: Key type fscrypt-provisioning registered
Sep 4 23:45:24.200071 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:45:24.200090 kernel: ima: No architecture policies found
Sep 4 23:45:24.200110 kernel: clk: Disabling unused clocks
Sep 4 23:45:24.200129 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 4 23:45:24.200149 kernel: Write protecting the kernel read-only data: 38912k
Sep 4 23:45:24.200178 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 23:45:24.200197 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 4 23:45:24.200217 kernel: Run /init as init process
Sep 4 23:45:24.200236 kernel: with arguments:
Sep 4 23:45:24.200255 kernel: /init
Sep 4 23:45:24.200274 kernel: with environment:
Sep 4 23:45:24.200293 kernel: HOME=/
Sep 4 23:45:24.200312 kernel: TERM=linux
Sep 4 23:45:24.200331 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:45:24.200357 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:45:24.200384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:45:24.200406 systemd[1]: Detected virtualization google.
Sep 4 23:45:24.200426 systemd[1]: Detected architecture x86-64.
Sep 4 23:45:24.200445 systemd[1]: Running in initrd.
Sep 4 23:45:24.200465 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:45:24.200486 systemd[1]: Hostname set to .
Sep 4 23:45:24.200510 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:45:24.200531 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:45:24.200551 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:24.200579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:24.200601 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:45:24.200623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:45:24.200643 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:45:24.200671 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:45:24.200713 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:45:24.200739 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:45:24.200760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:24.200782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:24.200807 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:45:24.200828 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:45:24.200849 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:45:24.200870 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:45:24.200923 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:45:24.200944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:45:24.200967 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:45:24.200989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:45:24.201015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:24.201041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:24.201062 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:24.201083 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:45:24.201105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:45:24.201126 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:45:24.201147 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:45:24.201168 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:45:24.201190 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:45:24.201212 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:45:24.201237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:24.201322 systemd-journald[184]: Collecting audit messages is disabled.
Sep 4 23:45:24.201370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:45:24.201392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:24.201419 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:45:24.201442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:45:24.201464 systemd-journald[184]: Journal started
Sep 4 23:45:24.201511 systemd-journald[184]: Runtime Journal (/run/log/journal/701a5404eaec4113ab15f9c45eeff77b) is 8M, max 148.6M, 140.6M free.
Sep 4 23:45:24.159953 systemd-modules-load[185]: Inserted module 'overlay'
Sep 4 23:45:24.210912 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:45:24.218484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:45:24.223113 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:45:24.226212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:24.231370 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 4 23:45:24.235151 kernel: Bridge firewalling registered
Sep 4 23:45:24.234134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:24.245623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:45:24.250646 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:24.263193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:24.266954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:45:24.284170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:45:24.302130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:24.314302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:24.330442 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:24.339576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:24.355094 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:45:24.378200 systemd-resolved[215]: Positive Trust Anchors:
Sep 4 23:45:24.378839 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:45:24.379089 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:45:24.398548 dracut-cmdline[219]: dracut-dracut-053
Sep 4 23:45:24.398548 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e
Sep 4 23:45:24.386174 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 4 23:45:24.390678 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:24.402736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:24.496940 kernel: SCSI subsystem initialized
Sep 4 23:45:24.508940 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:45:24.521927 kernel: iscsi: registered transport (tcp)
Sep 4 23:45:24.548938 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:45:24.549059 kernel: QLogic iSCSI HBA Driver
Sep 4 23:45:24.604694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:45:24.613146 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:45:24.693061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:45:24.693184 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:45:24.701905 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:45:24.756964 kernel: raid6: avx2x4 gen() 18215 MB/s
Sep 4 23:45:24.777974 kernel: raid6: avx2x2 gen() 17983 MB/s
Sep 4 23:45:24.803943 kernel: raid6: avx2x1 gen() 13970 MB/s
Sep 4 23:45:24.804007 kernel: raid6: using algorithm avx2x4 gen() 18215 MB/s
Sep 4 23:45:24.830929 kernel: raid6: .... xor() 7849 MB/s, rmw enabled
Sep 4 23:45:24.831010 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 23:45:24.859933 kernel: xor: automatically using best checksumming function avx
Sep 4 23:45:25.031926 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:45:25.045805 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:45:25.051171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:25.089686 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Sep 4 23:45:25.097920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:25.129109 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:45:25.168426 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Sep 4 23:45:25.207110 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:25.235161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:45:25.321169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:25.342564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:45:25.394697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:25.417093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:25.439088 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 23:45:25.445084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:25.492577 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 23:45:25.492620 kernel: AES CTR mode by8 optimization enabled
Sep 4 23:45:25.457019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:45:25.550607 kernel: scsi host0: Virtio SCSI HBA
Sep 4 23:45:25.484177 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:45:25.570073 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 4 23:45:25.531264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:45:25.531475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:25.604212 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:25.707235 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 4 23:45:25.707557 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 4 23:45:25.707814 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 4 23:45:25.708093 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 4 23:45:25.708335 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 4 23:45:25.708590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:45:25.708619 kernel: GPT:17805311 != 25165823
Sep 4 23:45:25.708651 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:45:25.708675 kernel: GPT:17805311 != 25165823
Sep 4 23:45:25.708697 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:45:25.708720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:45:25.708744 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 4 23:45:25.630760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:45:25.631020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:25.631405 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:25.654374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:25.717689 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:25.726237 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:25.758392 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449)
Sep 4 23:45:25.773909 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (466)
Sep 4 23:45:25.802075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 4 23:45:25.831426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:25.855082 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 4 23:45:25.884585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 4 23:45:25.884866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 4 23:45:25.931681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 4 23:45:25.936121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:45:25.957076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:25.988012 disk-uuid[543]: Primary Header is updated.
Sep 4 23:45:25.988012 disk-uuid[543]: Secondary Entries is updated.
Sep 4 23:45:25.988012 disk-uuid[543]: Secondary Header is updated.
Sep 4 23:45:26.006185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:45:26.051381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:27.035940 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 23:45:27.037025 disk-uuid[544]: The operation has completed successfully.
Sep 4 23:45:27.114121 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:45:27.114289 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:45:27.187214 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:45:27.222073 sh[566]: Success
Sep 4 23:45:27.248983 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 23:45:27.346182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:45:27.354304 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:45:27.379015 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:45:27.420857 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d
Sep 4 23:45:27.421002 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:45:27.421029 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:45:27.430459 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:45:27.437280 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:45:27.471947 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 23:45:27.478057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:45:27.479241 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:45:27.484178 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:45:27.550930 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:45:27.551059 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:45:27.551103 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:45:27.552354 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:45:27.599145 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:45:27.599212 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:45:27.599281 kernel: BTRFS info (device sda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:45:27.601306 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:45:27.618349 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:45:27.717146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:45:27.745211 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:45:27.834530 ignition[668]: Ignition 2.20.0
Sep 4 23:45:27.836730 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:45:27.834551 ignition[668]: Stage: fetch-offline
Sep 4 23:45:27.848300 systemd-networkd[746]: lo: Link UP
Sep 4 23:45:27.834630 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:27.848310 systemd-networkd[746]: lo: Gained carrier
Sep 4 23:45:27.834652 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:27.851098 systemd-networkd[746]: Enumeration completed
Sep 4 23:45:27.834807 ignition[668]: parsed url from cmdline: ""
Sep 4 23:45:27.851510 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:27.834828 ignition[668]: no config URL provided
Sep 4 23:45:27.851518 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:27.834837 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:45:27.852061 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:45:27.834851 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:45:27.853463 systemd-networkd[746]: eth0: Link UP
Sep 4 23:45:27.834862 ignition[668]: failed to fetch config: resource requires networking
Sep 4 23:45:27.853471 systemd-networkd[746]: eth0: Gained carrier
Sep 4 23:45:27.835200 ignition[668]: Ignition finished successfully
Sep 4 23:45:27.853485 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:27.925034 ignition[759]: Ignition 2.20.0
Sep 4 23:45:27.865306 systemd[1]: Reached target network.target - Network.
Sep 4 23:45:27.925046 ignition[759]: Stage: fetch
Sep 4 23:45:27.867068 systemd-networkd[746]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699.c.flatcar-212911.internal' to 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699'
Sep 4 23:45:27.925283 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:27.867097 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.91/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 4 23:45:27.925295 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:27.891189 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:45:27.925444 ignition[759]: parsed url from cmdline: ""
Sep 4 23:45:27.938858 unknown[759]: fetched base config from "system"
Sep 4 23:45:27.925452 ignition[759]: no config URL provided
Sep 4 23:45:27.938882 unknown[759]: fetched base config from "system"
Sep 4 23:45:27.925463 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:45:27.938917 unknown[759]: fetched user config from "gcp"
Sep 4 23:45:27.925476 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:45:27.942515 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:27.925508 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 4 23:45:27.974427 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:45:27.929858 ignition[759]: GET result: OK
Sep 4 23:45:28.030638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:27.930028 ignition[759]: parsing config with SHA512: a175271c91c9d1d62bbe42f9c2b40ae729af0ef046388103f93ab3714bfc2d408c61450c14b9a2fade1c426c1341e0418fcbd764df3a77bbb6a776849edff742
Sep 4 23:45:28.057495 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:45:27.940479 ignition[759]: fetch: fetch complete
Sep 4 23:45:28.090479 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:45:27.940490 ignition[759]: fetch: fetch passed
Sep 4 23:45:28.107335 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:27.940598 ignition[759]: Ignition finished successfully
Sep 4 23:45:28.126069 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:45:28.014804 ignition[765]: Ignition 2.20.0
Sep 4 23:45:28.141095 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:28.014816 ignition[765]: Stage: kargs
Sep 4 23:45:28.173097 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:28.015061 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:28.190107 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:45:28.015074 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:28.197109 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:45:28.016108 ignition[765]: kargs: kargs passed
Sep 4 23:45:28.016198 ignition[765]: Ignition finished successfully
Sep 4 23:45:28.075879 ignition[771]: Ignition 2.20.0
Sep 4 23:45:28.075904 ignition[771]: Stage: disks
Sep 4 23:45:28.076197 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:28.076216 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:28.078400 ignition[771]: disks: disks passed
Sep 4 23:45:28.078488 ignition[771]: Ignition finished successfully
Sep 4 23:45:28.272460 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 4 23:45:28.465062 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:45:28.471078 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:45:28.637939 kernel: EXT4-fs (sda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none.
Sep 4 23:45:28.638741 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:45:28.639641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:28.670053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:28.702077 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:45:28.751259 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (787)
Sep 4 23:45:28.751300 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:45:28.751331 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:45:28.751349 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:45:28.711643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:45:28.789130 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:45:28.789179 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:45:28.711725 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:45:28.711768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:28.776305 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:28.797433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:45:28.833262 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:45:28.980238 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:45:28.991140 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:45:29.001091 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:45:29.011195 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:45:29.157704 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:29.166115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:45:29.198024 kernel: BTRFS info (device sda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:45:29.213284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:29.223634 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:45:29.257954 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:29.275194 ignition[899]: INFO : Ignition 2.20.0
Sep 4 23:45:29.275194 ignition[899]: INFO : Stage: mount
Sep 4 23:45:29.297125 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:29.297125 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:29.297125 ignition[899]: INFO : mount: mount passed
Sep 4 23:45:29.297125 ignition[899]: INFO : Ignition finished successfully
Sep 4 23:45:29.278969 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:45:29.289084 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:45:29.513224 systemd-networkd[746]: eth0: Gained IPv6LL
Sep 4 23:45:29.645258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:29.696987 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (911)
Sep 4 23:45:29.715194 kernel: BTRFS info (device sda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c
Sep 4 23:45:29.715340 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 23:45:29.715368 kernel: BTRFS info (device sda6): using free space tree
Sep 4 23:45:29.737932 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 4 23:45:29.738066 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 23:45:29.741788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:29.785295 ignition[928]: INFO : Ignition 2.20.0
Sep 4 23:45:29.785295 ignition[928]: INFO : Stage: files
Sep 4 23:45:29.801226 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:29.801226 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:29.801226 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:45:29.801226 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:45:29.801226 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:45:29.859160 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:45:29.859160 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:45:29.859160 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:45:29.859160 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:45:29.859160 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 4 23:45:29.803615 unknown[928]: wrote ssh authorized keys file for user: core
Sep 4 23:45:29.946543 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:45:30.923830 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 4 23:45:30.942166 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:30.942166 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 4 23:45:31.150549 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:45:31.354515 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:31.354515 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:45:31.386139 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 4 23:45:31.732507 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:45:32.708935 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 4 23:45:32.708935 ignition[928]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:32.751241 ignition[928]: INFO : files: files passed
Sep 4 23:45:32.751241 ignition[928]: INFO : Ignition finished successfully
Sep 4 23:45:32.715160 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:45:32.745254 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:45:32.769271 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:45:32.809963 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:45:32.973143 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:32.973143 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:32.810146 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:45:33.026403 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:32.839730 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:32.855366 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:45:32.885226 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:45:32.988446 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:45:32.988606 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:45:33.016318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:33.036219 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:45:33.058390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:45:33.064287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:45:33.145274 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:33.172282 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:45:33.215077 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:33.228444 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:33.250455 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:45:33.269398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:45:33.269657 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:33.298540 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:45:33.320401 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:45:33.339401 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:45:33.358413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:33.380591 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:33.401481 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:45:33.421385 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:33.443496 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:45:33.464432 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:45:33.484524 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:45:33.504374 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:45:33.504662 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:33.533659 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:33.554526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:33.575469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:45:33.575670 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:33.587557 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:45:33.587794 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:33.628595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:45:33.628936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:33.657679 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:45:33.657973 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:45:33.674334 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:45:33.708377 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:33.765150 ignition[981]: INFO : Ignition 2.20.0
Sep 4 23:45:33.765150 ignition[981]: INFO : Stage: umount
Sep 4 23:45:33.765150 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:33.765150 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 4 23:45:33.765150 ignition[981]: INFO : umount: umount passed
Sep 4 23:45:33.765150 ignition[981]: INFO : Ignition finished successfully
Sep 4 23:45:33.721132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:45:33.721521 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:33.734448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:45:33.734708 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:33.760378 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:45:33.760563 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:45:33.789328 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:45:33.790523 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:45:33.790666 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:33.818407 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:45:33.818656 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:45:33.824436 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:45:33.824540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:33.850389 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:45:33.850490 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:33.871370 systemd[1]: Stopped target network.target - Network.
Sep 4 23:45:33.891430 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:45:33.891599 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:45:33.920407 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:45:33.938167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:45:33.943227 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:33.960272 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:45:33.978219 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:45:33.994274 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:45:33.994409 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:45:34.013299 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:45:34.013402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:45:34.032280 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:45:34.032421 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:45:34.052319 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:45:34.052432 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:45:34.073332 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:45:34.073544 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:34.092652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:45:34.112439 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:34.132363 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:45:34.132676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:34.144319 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:45:34.144823 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:45:34.145011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:45:34.160678 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:45:34.161357 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:45:34.161503 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:45:34.193576 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:45:34.193689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:34.214134 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:45:34.228158 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:45:34.228346 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:45:34.240273 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:45:34.725198 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:45:34.240405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:34.250444 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:45:34.250538 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:34.273314 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:45:34.273458 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:34.293509 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:34.313968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:45:34.314090 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:34.314652 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:45:34.314850 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:34.341607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:45:34.341712 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:34.349451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:45:34.349520 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:34.376240 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:45:34.376378 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:45:34.405175 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:45:34.405353 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:45:34.435173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:45:34.435433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:34.475225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:45:34.497133 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:45:34.497321 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:34.517467 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 23:45:34.517561 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:45:34.538306 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:45:34.538420 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:34.561268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:45:34.561384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:34.583826 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:45:34.583962 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:34.584558 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:45:34.584693 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:45:34.603600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:45:34.603755 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:45:34.616087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:45:34.649357 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:45:34.669065 systemd[1]: Switching root.
Sep 4 23:45:35.156171 systemd-journald[184]: Journal stopped
Sep 4 23:45:37.819187 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:45:37.819242 kernel: SELinux: policy capability open_perms=1
Sep 4 23:45:37.819258 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:45:37.819269 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:45:37.819281 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:45:37.819293 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:45:37.819307 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:45:37.819318 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:45:37.819334 kernel: audit: type=1403 audit(1757029535.440:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:45:37.819351 systemd[1]: Successfully loaded SELinux policy in 86.098ms.
Sep 4 23:45:37.819367 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14ms.
Sep 4 23:45:37.819383 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:45:37.819406 systemd[1]: Detected virtualization google.
Sep 4 23:45:37.819419 systemd[1]: Detected architecture x86-64.
Sep 4 23:45:37.819436 systemd[1]: Detected first boot.
Sep 4 23:45:37.819450 systemd[1]: Initializing machine ID from random generator.
Sep 4 23:45:37.819467 zram_generator::config[1024]: No configuration found.
Sep 4 23:45:37.819482 kernel: Guest personality initialized and is inactive
Sep 4 23:45:37.819495 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 23:45:37.819510 kernel: Initialized host personality
Sep 4 23:45:37.819522 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:45:37.819535 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:45:37.819549 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:45:37.819562 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:45:37.819583 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:45:37.819597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:37.819610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:45:37.819623 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:45:37.819641 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:45:37.819654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:45:37.819669 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:45:37.819683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:45:37.819696 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:45:37.819709 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:45:37.819723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:37.819740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:37.819753 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:45:37.819767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:45:37.819781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:45:37.819796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:45:37.819814 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:45:37.819831 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:37.819845 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:45:37.819862 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:37.819876 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:37.820040 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:45:37.820067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:37.820088 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:45:37.820108 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:45:37.820128 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:45:37.820149 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:45:37.820179 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:45:37.820200 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:45:37.820220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:37.820243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:37.820269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:37.820290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:45:37.820312 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:45:37.820332 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:45:37.820353 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:45:37.820375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:45:37.820398 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:45:37.820421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:45:37.820446 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:45:37.820468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:45:37.820490 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:45:37.820513 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:45:37.820537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:37.820559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:45:37.820595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:45:37.820617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:37.820638 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:37.820669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:37.820692 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:45:37.820715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:37.820736 kernel: fuse: init (API version 7.39)
Sep 4 23:45:37.820757 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:45:37.820779 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:45:37.820801 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:45:37.820827 kernel: ACPI: bus type drm_connector registered
Sep 4 23:45:37.820850 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:45:37.820873 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:45:37.820936 kernel: loop: module loaded
Sep 4 23:45:37.820961 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:37.820981 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:45:37.821001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:45:37.821073 systemd-journald[1112]: Collecting audit messages is disabled.
Sep 4 23:45:37.821113 systemd-journald[1112]: Journal started
Sep 4 23:45:37.821142 systemd-journald[1112]: Runtime Journal (/run/log/journal/49311ab3089d4c74948a75aa5310ff2c) is 8M, max 148.6M, 140.6M free.
Sep 4 23:45:36.555683 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:45:36.568288 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 4 23:45:36.569054 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:45:37.846928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:45:37.873932 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:45:37.886931 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:45:37.923015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:45:37.940929 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:45:37.946929 systemd[1]: Stopped verity-setup.service.
Sep 4 23:45:37.971997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:45:37.985951 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:45:37.996653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:45:38.006348 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:45:38.017457 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:45:38.027336 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:45:38.037368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:45:38.047343 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:45:38.057781 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:45:38.069660 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:38.082531 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:45:38.082855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:45:38.094555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:38.094861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:38.106525 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:38.106868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:38.117586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:38.117913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:38.129506 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:45:38.129809 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:45:38.140492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:38.140821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:38.151648 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:38.161525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:45:38.173618 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:45:38.185567 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:45:38.197534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:38.222581 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:45:38.238061 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:45:38.261087 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:45:38.271093 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:45:38.271167 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:38.283147 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:45:38.300182 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:45:38.319182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:45:38.330324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:38.334849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:45:38.352675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:45:38.363555 systemd-journald[1112]: Time spent on flushing to /var/log/journal/49311ab3089d4c74948a75aa5310ff2c is 100.437ms for 947 entries.
Sep 4 23:45:38.363555 systemd-journald[1112]: System Journal (/var/log/journal/49311ab3089d4c74948a75aa5310ff2c) is 8M, max 584.8M, 576.8M free.
Sep 4 23:45:38.505203 systemd-journald[1112]: Received client request to flush runtime journal.
Sep 4 23:45:38.371881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:38.381207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:45:38.392065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:38.400214 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:45:38.419146 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:45:38.440158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:45:38.457125 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:45:38.478785 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:45:38.490350 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:45:38.502570 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:45:38.525413 kernel: loop0: detected capacity change from 0 to 147912
Sep 4 23:45:38.525485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:45:38.537698 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:45:38.569602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:38.582150 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:45:38.584557 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Sep 4 23:45:38.585462 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Sep 4 23:45:38.607355 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:45:38.619955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:45:38.636444 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:45:38.648287 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:45:38.659018 udevadm[1150]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 23:45:38.692559 kernel: loop1: detected capacity change from 0 to 52152
Sep 4 23:45:38.689843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:45:38.693421 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:45:38.775794 kernel: loop2: detected capacity change from 0 to 138176
Sep 4 23:45:38.779458 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:45:38.810211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:45:38.876200 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Sep 4 23:45:38.877474 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Sep 4 23:45:38.896824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:38.920964 kernel: loop3: detected capacity change from 0 to 224512
Sep 4 23:45:39.029954 kernel: loop4: detected capacity change from 0 to 147912
Sep 4 23:45:39.089028 kernel: loop5: detected capacity change from 0 to 52152
Sep 4 23:45:39.145970 kernel: loop6: detected capacity change from 0 to 138176
Sep 4 23:45:39.217690 kernel: loop7: detected capacity change from 0 to 224512
Sep 4 23:45:39.258518 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 4 23:45:39.261834 (sd-merge)[1175]: Merged extensions into '/usr'.
Sep 4 23:45:39.273704 systemd[1]: Reload requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:45:39.274118 systemd[1]: Reloading...
Sep 4 23:45:39.440326 zram_generator::config[1199]: No configuration found.
Sep 4 23:45:39.573531 ldconfig[1143]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:45:39.750137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:39.883534 systemd[1]: Reloading finished in 608 ms.
Sep 4 23:45:39.902028 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:45:39.912859 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:45:39.937073 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:45:39.957470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:45:39.984582 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:45:40.005674 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:45:40.006307 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:45:40.009242 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:45:40.010110 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 4 23:45:40.010410 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Sep 4 23:45:40.011600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:40.019843 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:40.019862 systemd-tmpfiles[1244]: Skipping /boot
Sep 4 23:45:40.023776 systemd[1]: Reload requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:45:40.024111 systemd[1]: Reloading...
Sep 4 23:45:40.063938 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:40.063958 systemd-tmpfiles[1244]: Skipping /boot
Sep 4 23:45:40.112216 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Sep 4 23:45:40.194482 zram_generator::config[1277]: No configuration found.
Sep 4 23:45:40.486045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:40.513967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 23:45:40.525924 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 4 23:45:40.564923 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 4 23:45:40.615916 kernel: ACPI: button: Power Button [PWRF]
Sep 4 23:45:40.658309 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1331)
Sep 4 23:45:40.667242 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:45:40.668394 systemd[1]: Reloading finished in 643 ms.
Sep 4 23:45:40.681046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:40.710375 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 4 23:45:40.710503 kernel: EDAC MC: Ver: 3.0.0
Sep 4 23:45:40.724198 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:40.724926 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 23:45:40.793035 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:45:40.807918 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 23:45:40.882033 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:45:40.900761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 4 23:45:40.917751 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Sep 4 23:45:40.928141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:45:40.933137 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:40.949446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:45:40.961405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:40.974097 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:45:40.995773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:41.001004 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:41.013431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:41.030759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:41.050222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:41.068870 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 23:45:41.069276 augenrules[1375]: No rules
Sep 4 23:45:41.077285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:41.080260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:45:41.093088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:41.100229 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:45:41.122194 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:45:41.148358 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:41.158058 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:45:41.174195 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:45:41.197423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:41.207070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 23:45:41.221122 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:41.221844 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:41.235707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:45:41.247758 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:45:41.248369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:41.248690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:41.249255 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:41.249546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:41.250089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:41.250359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:41.251099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:41.251426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:41.259732 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:45:41.260338 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:45:41.262612 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 23:45:41.280416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:45:41.281389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:41.287227 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:45:41.289838 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 4 23:45:41.291998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:41.292117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:41.294216 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:45:41.300164 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:45:41.300260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:45:41.309767 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:41.363986 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:45:41.366483 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:45:41.387622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:41.399741 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 4 23:45:41.419622 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:45:41.530443 systemd-networkd[1384]: lo: Link UP
Sep 4 23:45:41.530457 systemd-networkd[1384]: lo: Gained carrier
Sep 4 23:45:41.533405 systemd-networkd[1384]: Enumeration completed
Sep 4 23:45:41.533567 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:45:41.534559 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:41.534572 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:41.535237 systemd-networkd[1384]: eth0: Link UP
Sep 4 23:45:41.535245 systemd-networkd[1384]: eth0: Gained carrier
Sep 4 23:45:41.535286 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:41.544172 systemd-networkd[1384]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699.c.flatcar-212911.internal' to 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699'
Sep 4 23:45:41.544205 systemd-networkd[1384]: eth0: DHCPv4 address 10.128.0.91/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 4 23:45:41.548877 systemd-resolved[1385]: Positive Trust Anchors:
Sep 4 23:45:41.549167 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:45:41.549240 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:45:41.551251 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:45:41.558629 systemd-resolved[1385]: Defaulting to hostname 'linux'.
Sep 4 23:45:41.569155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:45:41.569470 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:41.590402 systemd[1]: Reached target network.target - Network.
Sep 4 23:45:41.600038 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:41.611109 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:41.621285 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:45:41.634123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:45:41.645277 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:45:41.655285 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:45:41.667058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:45:41.678087 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:45:41.678144 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:45:41.687051 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:45:41.697139 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 23:45:41.708913 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 23:45:41.719726 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 23:45:41.731350 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 23:45:41.743121 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 23:45:41.764960 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 23:45:41.775756 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 23:45:41.788491 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:45:41.800405 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 23:45:41.811041 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:45:41.821080 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:45:41.830195 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:45:41.830257 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 23:45:41.837045 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 23:45:41.839066 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 23:45:41.868199 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 23:45:41.888617 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 23:45:41.921196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 23:45:41.923034 jq[1439]: false
Sep 4 23:45:41.932120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 23:45:41.941163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 23:45:41.959148 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 23:45:41.974734 coreos-metadata[1437]: Sep 04 23:45:41.974 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 4 23:45:41.977062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 23:45:41.981915 coreos-metadata[1437]: Sep 04 23:45:41.980 INFO Fetch successful
Sep 4 23:45:41.981915 coreos-metadata[1437]: Sep 04 23:45:41.980 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 4 23:45:41.984236 coreos-metadata[1437]: Sep 04 23:45:41.984 INFO Fetch successful
Sep 4 23:45:41.984236 coreos-metadata[1437]: Sep 04 23:45:41.984 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 4 23:45:41.985151 coreos-metadata[1437]: Sep 04 23:45:41.984 INFO Fetch successful
Sep 4 23:45:41.985151 coreos-metadata[1437]: Sep 04 23:45:41.985 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 4 23:45:41.989564 coreos-metadata[1437]: Sep 04 23:45:41.988 INFO Fetch successful
Sep 4 23:45:41.996159 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found loop4
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found loop5
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found loop6
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found loop7
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found sda
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found sda1
Sep 4 23:45:42.010634 extend-filesystems[1442]: Found sda2
Sep 4 23:45:42.099621 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 4 23:45:42.017218 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:32:00 UTC 2025 (1): Starting
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: ----------------------------------------------------
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: ntp-4 is maintained by Network Time Foundation,
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: corporation. Support and training for ntp-4 are
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: available at https://www.nwtime.org/support
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: ----------------------------------------------------
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: proto: precision = 0.071 usec (-24)
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: basedate set to 2025-08-23
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: gps base set to 2025-08-24 (week 2381)
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listen normally on 3 eth0 10.128.0.91:123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listen normally on 4 lo [::1]:123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: bind(21) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5b%2#123
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:5b%2
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: Listening on routing socket on fd #21 for interface updates
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:45:42.099848 ntpd[1444]: 4 Sep 23:45:42 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:45:42.021496 dbus-daemon[1438]: [system] SELinux support is enabled
Sep 4 23:45:42.113974 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 4 23:45:42.140602 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1297)
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found sda3
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found usr
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found sda4
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found sda6
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found sda7
Sep 4 23:45:42.140769 extend-filesystems[1442]: Found sda9
Sep 4 23:45:42.140769 extend-filesystems[1442]: Checking size of /dev/sda9
Sep 4 23:45:42.140769 extend-filesystems[1442]: Resized partition /dev/sda9
Sep 4 23:45:42.060157 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 23:45:42.026379 dbus-daemon[1438]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1384 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 23:45:42.236813 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
Sep 4 23:45:42.236813 extend-filesystems[1463]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 4 23:45:42.236813 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 4 23:45:42.236813 extend-filesystems[1463]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 4 23:45:42.091623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 4 23:45:42.065085 ntpd[1444]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:32:00 UTC 2025 (1): Starting
Sep 4 23:45:42.300109 extend-filesystems[1442]: Resized filesystem in /dev/sda9
Sep 4 23:45:42.093706 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 23:45:42.065118 ntpd[1444]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 23:45:42.312353 update_engine[1465]: I20250904 23:45:42.203581 1465 main.cc:92] Flatcar Update Engine starting
Sep 4 23:45:42.312353 update_engine[1465]: I20250904 23:45:42.212132 1465 update_check_scheduler.cc:74] Next update check in 8m6s
Sep 4 23:45:42.099145 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 23:45:42.065135 ntpd[1444]: ----------------------------------------------------
Sep 4 23:45:42.339207 jq[1466]: true
Sep 4 23:45:42.134192 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 23:45:42.065150 ntpd[1444]: ntp-4 is maintained by Network Time Foundation,
Sep 4 23:45:42.145649 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 23:45:42.065163 ntpd[1444]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 23:45:42.173004 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 23:45:42.065176 ntpd[1444]: corporation. Support and training for ntp-4 are
Sep 4 23:45:42.173387 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 23:45:42.065193 ntpd[1444]: available at https://www.nwtime.org/support
Sep 4 23:45:42.360255 jq[1475]: true
Sep 4 23:45:42.174228 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 23:45:42.065207 ntpd[1444]: ----------------------------------------------------
Sep 4 23:45:42.175110 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 23:45:42.072007 ntpd[1444]: proto: precision = 0.071 usec (-24)
Sep 4 23:45:42.191716 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 23:45:42.073400 ntpd[1444]: basedate set to 2025-08-23
Sep 4 23:45:42.192086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 23:45:42.073426 ntpd[1444]: gps base set to 2025-08-24 (week 2381)
Sep 4 23:45:42.209576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 23:45:42.078388 ntpd[1444]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 23:45:42.209979 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 23:45:42.078459 ntpd[1444]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 23:45:42.354357 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 23:45:42.078748 ntpd[1444]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 23:45:42.364586 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 23:45:42.078809 ntpd[1444]: Listen normally on 3 eth0 10.128.0.91:123
Sep 4 23:45:42.378719 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 23:45:42.078875 ntpd[1444]: Listen normally on 4 lo [::1]:123
Sep 4 23:45:42.378767 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 23:45:42.078956 ntpd[1444]: bind(21) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:45:42.078986 ntpd[1444]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5b%2#123
Sep 4 23:45:42.079007 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:5b%2
Sep 4 23:45:42.079050 ntpd[1444]: Listening on routing socket on fd #21 for interface updates
Sep 4 23:45:42.081265 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:45:42.081301 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 23:45:42.304336 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 23:45:42.393793 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 23:45:42.402211 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 23:45:42.402280 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 23:45:42.420933 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 23:45:42.432145 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 23:45:42.459703 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 23:45:42.459753 systemd-logind[1462]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 23:45:42.459786 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 23:45:42.473763 systemd-logind[1462]: New seat seat0.
Sep 4 23:45:42.476582 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 23:45:42.490342 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 23:45:42.490509 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 23:45:42.548685 tar[1474]: linux-amd64/LICENSE
Sep 4 23:45:42.549175 tar[1474]: linux-amd64/helm
Sep 4 23:45:42.578560 bash[1508]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:45:42.580989 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 23:45:42.604149 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 23:45:42.605112 systemd[1]: Starting sshkeys.service...
Sep 4 23:45:42.609566 dbus-daemon[1438]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1491 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 23:45:42.612775 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 23:45:42.635743 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 23:45:42.687257 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 23:45:42.707540 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 23:45:42.795662 polkitd[1511]: Started polkitd version 121
Sep 4 23:45:42.829542 polkitd[1511]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 23:45:42.838545 polkitd[1511]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 23:45:42.847070 polkitd[1511]: Finished loading, compiling and executing 2 rules
Sep 4 23:45:42.853415 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 23:45:42.853683 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 23:45:42.855444 polkitd[1511]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 23:45:42.909477 coreos-metadata[1513]: Sep 04 23:45:42.907 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 4 23:45:42.911925 coreos-metadata[1513]: Sep 04 23:45:42.911 INFO Fetch failed with 404: resource not found
Sep 4 23:45:42.911925 coreos-metadata[1513]: Sep 04 23:45:42.911 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 4 23:45:42.914009 coreos-metadata[1513]: Sep 04 23:45:42.913 INFO Fetch successful
Sep 4 23:45:42.914009 coreos-metadata[1513]: Sep 04 23:45:42.913 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 4 23:45:42.914009 coreos-metadata[1513]: Sep 04 23:45:42.913 INFO Fetch failed with 404: resource not found
Sep 4 23:45:42.916918 coreos-metadata[1513]: Sep 04 23:45:42.914 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 4 23:45:42.920763 coreos-metadata[1513]: Sep 04 23:45:42.917 INFO Fetch failed with 404: resource not found
Sep 4 23:45:42.920763 coreos-metadata[1513]: Sep 04 23:45:42.919 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 4 23:45:42.921250 coreos-metadata[1513]: Sep 04 23:45:42.921 INFO Fetch successful
Sep 4 23:45:42.928219 unknown[1513]: wrote ssh authorized keys file for user: core
Sep 4 23:45:42.943422 systemd-hostnamed[1491]: Hostname set to (transient)
Sep 4 23:45:42.946679 systemd-resolved[1385]: System hostname changed to 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699'.
Sep 4 23:45:43.037859 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:45:43.036692 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 23:45:43.058691 systemd[1]: Finished sshkeys.service.
Sep 4 23:45:43.065752 ntpd[1444]: bind(24) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:45:43.067034 ntpd[1444]: 4 Sep 23:45:43 ntpd[1444]: bind(24) AF_INET6 fe80::4001:aff:fe80:5b%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 23:45:43.067034 ntpd[1444]: 4 Sep 23:45:43 ntpd[1444]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5b%2#123
Sep 4 23:45:43.067034 ntpd[1444]: 4 Sep 23:45:43 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:5b%2
Sep 4 23:45:43.065809 ntpd[1444]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:5b%2#123
Sep 4 23:45:43.065831 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:5b%2
Sep 4 23:45:43.213203 containerd[1483]: time="2025-09-04T23:45:43.212815619Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 4 23:45:43.266546 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 23:45:43.321119 containerd[1483]: time="2025-09-04T23:45:43.320758069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.327839 containerd[1483]: time="2025-09-04T23:45:43.327768625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:43.328971 containerd[1483]: time="2025-09-04T23:45:43.328941377Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 23:45:43.329310 containerd[1483]: time="2025-09-04T23:45:43.329032156Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 23:45:43.329722 containerd[1483]: time="2025-09-04T23:45:43.329591275Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.329962387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330098558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330121755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330480799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330508327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330531213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330547722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.331215 containerd[1483]: time="2025-09-04T23:45:43.330659946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.333002 containerd[1483]: time="2025-09-04T23:45:43.332971384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 23:45:43.334728 containerd[1483]: time="2025-09-04T23:45:43.334411718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 23:45:43.334728 containerd[1483]: time="2025-09-04T23:45:43.334468212Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 23:45:43.334728 containerd[1483]: time="2025-09-04T23:45:43.334663707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 23:45:43.334994 containerd[1483]: time="2025-09-04T23:45:43.334973039Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 23:45:43.341177 containerd[1483]: time="2025-09-04T23:45:43.341108775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 23:45:43.341376 containerd[1483]: time="2025-09-04T23:45:43.341292258Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 23:45:43.342158 containerd[1483]: time="2025-09-04T23:45:43.341323910Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 23:45:43.342158 containerd[1483]: time="2025-09-04T23:45:43.341597577Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 23:45:43.342158 containerd[1483]: time="2025-09-04T23:45:43.341626798Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.341876888Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343542120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343706114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343731925Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343757587Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343780543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343803718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343823787Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343845934Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.343921 containerd[1483]: time="2025-09-04T23:45:43.343869164Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345070028Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345110938Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345132222Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345166209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345190321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345213418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345236498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345256750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345278868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345298264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345321302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..."
type=io.containerd.grpc.v1 Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345343286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345370148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.345915 containerd[1483]: time="2025-09-04T23:45:43.345396722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345416582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345448156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345473058Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345505892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345527949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.346523 containerd[1483]: time="2025-09-04T23:45:43.345546071Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.347718873Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.347943108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.347970174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.347993022Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.348007982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.348037768Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.348057214Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:45:43.348097 containerd[1483]: time="2025-09-04T23:45:43.348077369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:43.349043 containerd[1483]: time="2025-09-04T23:45:43.348549433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:45:43.349043 containerd[1483]: time="2025-09-04T23:45:43.348637367Z" level=info msg="Connect containerd service" Sep 4 23:45:43.349043 containerd[1483]: time="2025-09-04T23:45:43.348688355Z" level=info msg="using legacy CRI server" Sep 4 23:45:43.349043 containerd[1483]: time="2025-09-04T23:45:43.348700584Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:45:43.349043 containerd[1483]: time="2025-09-04T23:45:43.348877294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355021783Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355480644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355557840Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355613786Z" level=info msg="Start subscribing containerd event" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355671612Z" level=info msg="Start recovering state" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355761096Z" level=info msg="Start event monitor" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355782733Z" level=info msg="Start snapshots syncer" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355797214Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.355811773Z" level=info msg="Start streaming server" Sep 4 23:45:43.358911 containerd[1483]: time="2025-09-04T23:45:43.357802775Z" level=info msg="containerd successfully booted in 0.147517s" Sep 4 23:45:43.356075 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:45:43.401108 systemd-networkd[1384]: eth0: Gained IPv6LL Sep 4 23:45:43.406929 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:45:43.418953 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:45:43.440185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:43.457335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:45:43.476321 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 4 23:45:43.511489 init.sh[1543]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 4 23:45:43.515045 init.sh[1543]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 4 23:45:43.515045 init.sh[1543]: + /usr/bin/google_instance_setup Sep 4 23:45:43.545759 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 4 23:45:43.761943 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:45:43.848928 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:45:43.869651 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:45:43.886424 systemd[1]: Started sshd@0-10.128.0.91:22-139.178.68.195:51770.service - OpenSSH per-connection server daemon (139.178.68.195:51770). Sep 4 23:45:43.918591 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:45:43.919515 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:45:43.940385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:45:44.026075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:45:44.033042 tar[1474]: linux-amd64/README.md Sep 4 23:45:44.052101 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:45:44.073061 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:45:44.084669 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:45:44.095261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:45:44.386965 sshd[1563]: Accepted publickey for core from 139.178.68.195 port 51770 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:44.392432 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:44.409193 instance-setup[1549]: INFO Running google_set_multiqueue. Sep 4 23:45:44.414688 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:45:44.435512 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:45:44.435754 instance-setup[1549]: INFO Set channels for eth0 to 2. Sep 4 23:45:44.443017 instance-setup[1549]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Sep 4 23:45:44.444925 instance-setup[1549]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 4 23:45:44.445540 instance-setup[1549]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 4 23:45:44.448575 instance-setup[1549]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 4 23:45:44.451285 instance-setup[1549]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 4 23:45:44.453704 instance-setup[1549]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 4 23:45:44.456585 instance-setup[1549]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 4 23:45:44.465237 instance-setup[1549]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 4 23:45:44.472175 systemd-logind[1462]: New session 1 of user core. Sep 4 23:45:44.475499 instance-setup[1549]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 23:45:44.490218 instance-setup[1549]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 4 23:45:44.493759 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:45:44.493770 instance-setup[1549]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 4 23:45:44.493821 instance-setup[1549]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 4 23:45:44.519348 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:45:44.532251 init.sh[1543]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 4 23:45:44.557958 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:45:44.565429 systemd-logind[1462]: New session c1 of user core. Sep 4 23:45:44.803186 startup-script[1608]: INFO Starting startup scripts. Sep 4 23:45:44.811913 startup-script[1608]: INFO No startup scripts found in metadata. 
Sep 4 23:45:44.812007 startup-script[1608]: INFO Finished running startup scripts. Sep 4 23:45:44.868653 init.sh[1543]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 4 23:45:44.869708 init.sh[1543]: + daemon_pids=() Sep 4 23:45:44.871134 init.sh[1543]: + for d in accounts clock_skew network Sep 4 23:45:44.871134 init.sh[1543]: + daemon_pids+=($!) Sep 4 23:45:44.871134 init.sh[1543]: + for d in accounts clock_skew network Sep 4 23:45:44.871134 init.sh[1543]: + daemon_pids+=($!) Sep 4 23:45:44.871134 init.sh[1543]: + for d in accounts clock_skew network Sep 4 23:45:44.871134 init.sh[1543]: + daemon_pids+=($!) Sep 4 23:45:44.871134 init.sh[1543]: + NOTIFY_SOCKET=/run/systemd/notify Sep 4 23:45:44.871134 init.sh[1543]: + /usr/bin/systemd-notify --ready Sep 4 23:45:44.873501 init.sh[1616]: + /usr/bin/google_accounts_daemon Sep 4 23:45:44.877115 init.sh[1617]: + /usr/bin/google_clock_skew_daemon Sep 4 23:45:44.877521 init.sh[1618]: + /usr/bin/google_network_daemon Sep 4 23:45:44.901748 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 4 23:45:44.907143 systemd[1607]: Queued start job for default target default.target. Sep 4 23:45:44.914660 systemd[1607]: Created slice app.slice - User Application Slice. Sep 4 23:45:44.914709 systemd[1607]: Reached target paths.target - Paths. Sep 4 23:45:44.916800 systemd[1607]: Reached target timers.target - Timers. Sep 4 23:45:44.922544 init.sh[1543]: + wait -n 1616 1617 1618 Sep 4 23:45:44.927233 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:45:44.960201 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:45:44.961675 systemd[1607]: Reached target sockets.target - Sockets. Sep 4 23:45:44.961881 systemd[1607]: Reached target basic.target - Basic System. Sep 4 23:45:44.961991 systemd[1607]: Reached target default.target - Main User Target. Sep 4 23:45:44.962045 systemd[1607]: Startup finished in 376ms. 
Sep 4 23:45:44.962173 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:45:44.978328 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:45:45.262190 systemd[1]: Started sshd@1-10.128.0.91:22-139.178.68.195:54668.service - OpenSSH per-connection server daemon (139.178.68.195:54668). Sep 4 23:45:45.420034 google-networking[1618]: INFO Starting Google Networking daemon. Sep 4 23:45:45.459734 google-clock-skew[1617]: INFO Starting Google Clock Skew daemon. Sep 4 23:45:45.466469 google-clock-skew[1617]: INFO Clock drift token has changed: 0. Sep 4 23:45:45.493875 groupadd[1633]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 4 23:45:45.497947 groupadd[1633]: group added to /etc/gshadow: name=google-sudoers Sep 4 23:45:45.566879 groupadd[1633]: new group: name=google-sudoers, GID=1000 Sep 4 23:45:45.605318 google-accounts[1616]: INFO Starting Google Accounts daemon. Sep 4 23:45:45.634301 google-accounts[1616]: WARNING OS Login not installed. Sep 4 23:45:45.637193 google-accounts[1616]: INFO Creating a new user account for 0. Sep 4 23:45:45.639955 sshd[1630]: Accepted publickey for core from 139.178.68.195 port 54668 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:45.641207 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:45.648168 init.sh[1647]: useradd: invalid user name '0': use --badname to ignore Sep 4 23:45:45.648863 google-accounts[1616]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 4 23:45:45.663170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:45.668491 systemd-logind[1462]: New session 2 of user core. Sep 4 23:45:45.676186 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 23:45:45.677231 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:45.686344 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:45:45.696331 systemd[1]: Startup finished in 1.235s (kernel) + 11.642s (initrd) + 10.339s (userspace) = 23.217s. Sep 4 23:45:45.883973 sshd[1651]: Connection closed by 139.178.68.195 port 54668 Sep 4 23:45:45.886237 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:45.891662 systemd[1]: sshd@1-10.128.0.91:22-139.178.68.195:54668.service: Deactivated successfully. Sep 4 23:45:45.895001 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:45:45.897798 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:45:45.900103 systemd-logind[1462]: Removed session 2. Sep 4 23:45:45.944197 systemd[1]: Started sshd@2-10.128.0.91:22-139.178.68.195:54684.service - OpenSSH per-connection server daemon (139.178.68.195:54684). Sep 4 23:45:46.000205 systemd-resolved[1385]: Clock change detected. Flushing caches. Sep 4 23:45:46.001986 google-clock-skew[1617]: INFO Synced system time with hardware clock. Sep 4 23:45:46.016764 ntpd[1444]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5b%2]:123 Sep 4 23:45:46.017365 ntpd[1444]: 4 Sep 23:45:46 ntpd[1444]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:5b%2]:123 Sep 4 23:45:46.205747 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 54684 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:46.208638 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:46.218943 systemd-logind[1462]: New session 3 of user core. Sep 4 23:45:46.222253 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 4 23:45:46.370558 kubelet[1650]: E0904 23:45:46.370471 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:46.375718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:46.376598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:46.377395 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 264.2M memory peak. Sep 4 23:45:46.419228 sshd[1667]: Connection closed by 139.178.68.195 port 54684 Sep 4 23:45:46.420192 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:46.426165 systemd[1]: sshd@2-10.128.0.91:22-139.178.68.195:54684.service: Deactivated successfully. Sep 4 23:45:46.428486 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:45:46.429486 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:45:46.431323 systemd-logind[1462]: Removed session 3. Sep 4 23:45:46.478399 systemd[1]: Started sshd@3-10.128.0.91:22-139.178.68.195:54690.service - OpenSSH per-connection server daemon (139.178.68.195:54690). Sep 4 23:45:46.776361 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 54690 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:46.778271 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:46.784600 systemd-logind[1462]: New session 4 of user core. Sep 4 23:45:46.792248 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 4 23:45:46.995329 sshd[1676]: Connection closed by 139.178.68.195 port 54690 Sep 4 23:45:46.996277 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:47.001391 systemd[1]: sshd@3-10.128.0.91:22-139.178.68.195:54690.service: Deactivated successfully. Sep 4 23:45:47.004084 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:45:47.006304 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:45:47.007873 systemd-logind[1462]: Removed session 4. Sep 4 23:45:47.053398 systemd[1]: Started sshd@4-10.128.0.91:22-139.178.68.195:54698.service - OpenSSH per-connection server daemon (139.178.68.195:54698). Sep 4 23:45:47.349061 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 54698 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:47.351231 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:47.359026 systemd-logind[1462]: New session 5 of user core. Sep 4 23:45:47.368292 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:45:47.556457 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:45:47.557083 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:47.578474 sudo[1685]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:47.622497 sshd[1684]: Connection closed by 139.178.68.195 port 54698 Sep 4 23:45:47.624278 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:47.630614 systemd[1]: sshd@4-10.128.0.91:22-139.178.68.195:54698.service: Deactivated successfully. Sep 4 23:45:47.634400 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:45:47.637302 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:45:47.639418 systemd-logind[1462]: Removed session 5. 
Sep 4 23:45:47.687112 systemd[1]: Started sshd@5-10.128.0.91:22-139.178.68.195:54700.service - OpenSSH per-connection server daemon (139.178.68.195:54700). Sep 4 23:45:48.006159 sshd[1691]: Accepted publickey for core from 139.178.68.195 port 54700 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:48.008381 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:48.015328 systemd-logind[1462]: New session 6 of user core. Sep 4 23:45:48.034384 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:45:48.191520 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:45:48.192115 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:48.198948 sudo[1695]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:48.216101 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:45:48.216647 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:48.234558 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:45:48.277194 augenrules[1717]: No rules Sep 4 23:45:48.278626 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:45:48.279001 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:45:48.280678 sudo[1694]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:48.324689 sshd[1693]: Connection closed by 139.178.68.195 port 54700 Sep 4 23:45:48.325621 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:48.331292 systemd[1]: sshd@5-10.128.0.91:22-139.178.68.195:54700.service: Deactivated successfully. Sep 4 23:45:48.333621 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:45:48.334683 systemd-logind[1462]: Session 6 logged out. 
Waiting for processes to exit. Sep 4 23:45:48.336462 systemd-logind[1462]: Removed session 6. Sep 4 23:45:48.383341 systemd[1]: Started sshd@6-10.128.0.91:22-139.178.68.195:54710.service - OpenSSH per-connection server daemon (139.178.68.195:54710). Sep 4 23:45:48.677700 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 54710 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:45:48.679828 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:48.686397 systemd-logind[1462]: New session 7 of user core. Sep 4 23:45:48.694254 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:45:48.859081 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:45:48.859602 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:49.337406 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:45:49.340043 (dockerd)[1745]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:45:49.787234 dockerd[1745]: time="2025-09-04T23:45:49.787034471Z" level=info msg="Starting up" Sep 4 23:45:49.912597 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3119591076-merged.mount: Deactivated successfully. Sep 4 23:45:50.011789 dockerd[1745]: time="2025-09-04T23:45:50.011718341Z" level=info msg="Loading containers: start." Sep 4 23:45:50.238953 kernel: Initializing XFRM netlink socket Sep 4 23:45:50.351478 systemd-networkd[1384]: docker0: Link UP Sep 4 23:45:50.387177 dockerd[1745]: time="2025-09-04T23:45:50.387107788Z" level=info msg="Loading containers: done." 
Sep 4 23:45:50.412015 dockerd[1745]: time="2025-09-04T23:45:50.411253580Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:45:50.412015 dockerd[1745]: time="2025-09-04T23:45:50.411399762Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:45:50.412015 dockerd[1745]: time="2025-09-04T23:45:50.411564192Z" level=info msg="Daemon has completed initialization" Sep 4 23:45:50.411938 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2803714215-merged.mount: Deactivated successfully. Sep 4 23:45:50.453585 dockerd[1745]: time="2025-09-04T23:45:50.453004375Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:45:50.453723 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:45:51.361342 containerd[1483]: time="2025-09-04T23:45:51.361260217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:45:51.919665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907460295.mount: Deactivated successfully. 
Sep 4 23:45:53.820570 containerd[1483]: time="2025-09-04T23:45:53.820445912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:53.822269 containerd[1483]: time="2025-09-04T23:45:53.822198653Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28807315"
Sep 4 23:45:53.823805 containerd[1483]: time="2025-09-04T23:45:53.823234231Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:53.826708 containerd[1483]: time="2025-09-04T23:45:53.826668130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:53.828406 containerd[1483]: time="2025-09-04T23:45:53.828358633Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.467004604s"
Sep 4 23:45:53.828578 containerd[1483]: time="2025-09-04T23:45:53.828548621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 4 23:45:53.829461 containerd[1483]: time="2025-09-04T23:45:53.829429819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 4 23:45:55.539854 containerd[1483]: time="2025-09-04T23:45:55.539763267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.541663 containerd[1483]: time="2025-09-04T23:45:55.541579940Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24786062"
Sep 4 23:45:55.543198 containerd[1483]: time="2025-09-04T23:45:55.542556095Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.546648 containerd[1483]: time="2025-09-04T23:45:55.546590832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.548499 containerd[1483]: time="2025-09-04T23:45:55.548440735Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.71896408s"
Sep 4 23:45:55.548647 containerd[1483]: time="2025-09-04T23:45:55.548504590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 4 23:45:55.549478 containerd[1483]: time="2025-09-04T23:45:55.549440058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 4 23:45:56.560752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:56.569893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:56.934308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:56.939418 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:57.070181 kubelet[2003]: E0904 23:45:57.070044 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:57.078672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:57.078969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:57.079872 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.5M memory peak.
Sep 4 23:45:57.179626 containerd[1483]: time="2025-09-04T23:45:57.179527741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.181337 containerd[1483]: time="2025-09-04T23:45:57.181260618Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19176952"
Sep 4 23:45:57.182352 containerd[1483]: time="2025-09-04T23:45:57.182279442Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.186291 containerd[1483]: time="2025-09-04T23:45:57.186219441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.188122 containerd[1483]: time="2025-09-04T23:45:57.187743744Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.638253822s"
Sep 4 23:45:57.188122 containerd[1483]: time="2025-09-04T23:45:57.187803018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 4 23:45:57.189124 containerd[1483]: time="2025-09-04T23:45:57.189075153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 4 23:45:58.361310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031327545.mount: Deactivated successfully.
Sep 4 23:45:59.126506 containerd[1483]: time="2025-09-04T23:45:59.126425685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.128236 containerd[1483]: time="2025-09-04T23:45:59.127883888Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30899065"
Sep 4 23:45:59.130956 containerd[1483]: time="2025-09-04T23:45:59.129668379Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.133386 containerd[1483]: time="2025-09-04T23:45:59.133320126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.134584 containerd[1483]: time="2025-09-04T23:45:59.134533015Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.945303401s"
Sep 4 23:45:59.134801 containerd[1483]: time="2025-09-04T23:45:59.134768704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 4 23:45:59.135565 containerd[1483]: time="2025-09-04T23:45:59.135502056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:45:59.647128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189002695.mount: Deactivated successfully.
Sep 4 23:46:01.098801 containerd[1483]: time="2025-09-04T23:46:01.098710213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.100547 containerd[1483]: time="2025-09-04T23:46:01.100479757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Sep 4 23:46:01.102714 containerd[1483]: time="2025-09-04T23:46:01.102121363Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.106004 containerd[1483]: time="2025-09-04T23:46:01.105952133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.107555 containerd[1483]: time="2025-09-04T23:46:01.107506213Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.971956483s"
Sep 4 23:46:01.107736 containerd[1483]: time="2025-09-04T23:46:01.107710984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 4 23:46:01.108480 containerd[1483]: time="2025-09-04T23:46:01.108450242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:46:01.576296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount308473826.mount: Deactivated successfully.
Sep 4 23:46:01.588283 containerd[1483]: time="2025-09-04T23:46:01.588189390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.589806 containerd[1483]: time="2025-09-04T23:46:01.589720081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Sep 4 23:46:01.592955 containerd[1483]: time="2025-09-04T23:46:01.591223110Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.595276 containerd[1483]: time="2025-09-04T23:46:01.595215889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.596577 containerd[1483]: time="2025-09-04T23:46:01.596519279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.905326ms"
Sep 4 23:46:01.596753 containerd[1483]: time="2025-09-04T23:46:01.596581781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 23:46:01.597610 containerd[1483]: time="2025-09-04T23:46:01.597565966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 4 23:46:02.123683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608757997.mount: Deactivated successfully.
Sep 4 23:46:05.323381 containerd[1483]: time="2025-09-04T23:46:05.323294807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.325833 containerd[1483]: time="2025-09-04T23:46:05.325732090Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565"
Sep 4 23:46:05.326837 containerd[1483]: time="2025-09-04T23:46:05.326800592Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.332349 containerd[1483]: time="2025-09-04T23:46:05.332296240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:05.334096 containerd[1483]: time="2025-09-04T23:46:05.333560992Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.735941272s"
Sep 4 23:46:05.334096 containerd[1483]: time="2025-09-04T23:46:05.333617769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 4 23:46:07.310463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 23:46:07.323341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:07.689280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:07.697875 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:46:07.788006 kubelet[2159]: E0904 23:46:07.787907 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:46:07.793015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:46:07.793485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:46:07.794253 systemd[1]: kubelet.service: Consumed 273ms CPU time, 110.8M memory peak.
Sep 4 23:46:09.643411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:09.643734 systemd[1]: kubelet.service: Consumed 273ms CPU time, 110.8M memory peak.
Sep 4 23:46:09.658648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:09.708991 systemd[1]: Reload requested from client PID 2173 ('systemctl') (unit session-7.scope)...
Sep 4 23:46:09.709257 systemd[1]: Reloading...
Sep 4 23:46:09.938972 zram_generator::config[2219]: No configuration found.
Sep 4 23:46:10.105541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:46:10.251955 systemd[1]: Reloading finished in 541 ms.
Sep 4 23:46:10.333654 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:46:10.337556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:10.338592 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:46:10.338946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:10.339031 systemd[1]: kubelet.service: Consumed 177ms CPU time, 99.3M memory peak.
Sep 4 23:46:10.349420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:46:10.680569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:46:10.694645 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:46:10.753566 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:46:10.753999 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:46:10.753999 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:46:10.754231 kubelet[2274]: I0904 23:46:10.754160 2274 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 23:46:11.645473 kubelet[2274]: I0904 23:46:11.645412 2274 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 4 23:46:11.645473 kubelet[2274]: I0904 23:46:11.645468 2274 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 23:46:11.646948 kubelet[2274]: I0904 23:46:11.646206 2274 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 4 23:46:11.692216 kubelet[2274]: E0904 23:46:11.692155 2274 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:11.693011 kubelet[2274]: I0904 23:46:11.692977 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 23:46:11.705881 kubelet[2274]: E0904 23:46:11.705810 2274 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 4 23:46:11.705881 kubelet[2274]: I0904 23:46:11.705863 2274 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 4 23:46:11.712170 kubelet[2274]: I0904 23:46:11.712125 2274 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 23:46:11.712548 kubelet[2274]: I0904 23:46:11.712478 2274 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 23:46:11.712796 kubelet[2274]: I0904 23:46:11.712526 2274 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 23:46:11.712796 kubelet[2274]: I0904 23:46:11.712793 2274 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 23:46:11.713050 kubelet[2274]: I0904 23:46:11.712813 2274 container_manager_linux.go:304] "Creating device plugin manager"
Sep 4 23:46:11.713050 kubelet[2274]: I0904 23:46:11.713026 2274 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:46:11.718945 kubelet[2274]: I0904 23:46:11.718709 2274 kubelet.go:446] "Attempting to sync node with API server"
Sep 4 23:46:11.718945 kubelet[2274]: I0904 23:46:11.718791 2274 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 23:46:11.718945 kubelet[2274]: I0904 23:46:11.718827 2274 kubelet.go:352] "Adding apiserver pod source"
Sep 4 23:46:11.718945 kubelet[2274]: I0904 23:46:11.718848 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 23:46:11.726476 kubelet[2274]: W0904 23:46:11.726284 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699&limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused
Sep 4 23:46:11.726476 kubelet[2274]: E0904 23:46:11.726405 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:11.726834 kubelet[2274]: I0904 23:46:11.726785 2274 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 4 23:46:11.728307 kubelet[2274]: I0904 23:46:11.727480 2274 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 23:46:11.728501 kubelet[2274]: W0904 23:46:11.728454 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 23:46:11.733209 kubelet[2274]: I0904 23:46:11.733175 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 23:46:11.733334 kubelet[2274]: I0904 23:46:11.733250 2274 server.go:1287] "Started kubelet"
Sep 4 23:46:11.737558 kubelet[2274]: I0904 23:46:11.737507 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 23:46:11.744957 kubelet[2274]: I0904 23:46:11.744814 2274 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 23:46:11.746965 kubelet[2274]: I0904 23:46:11.746649 2274 server.go:479] "Adding debug handlers to kubelet server"
Sep 4 23:46:11.757197 kubelet[2274]: I0904 23:46:11.757153 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 23:46:11.761467 kubelet[2274]: I0904 23:46:11.759736 2274 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 23:46:11.761467 kubelet[2274]: E0904 23:46:11.760121 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found"
Sep 4 23:46:11.761467 kubelet[2274]: I0904 23:46:11.760876 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 23:46:11.761467 kubelet[2274]: I0904 23:46:11.760981 2274 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 23:46:11.761811 kubelet[2274]: W0904 23:46:11.761475 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused
Sep 4 23:46:11.761811 kubelet[2274]: E0904 23:46:11.761545 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:11.764313 kubelet[2274]: I0904 23:46:11.764216 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 23:46:11.764591 kubelet[2274]: I0904 23:46:11.764550 2274 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 23:46:11.768077 kubelet[2274]: E0904 23:46:11.765160 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699.186239161b03b2a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,UID:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,},FirstTimestamp:2025-09-04 23:46:11.733205671 +0000 UTC m=+1.032482157,LastTimestamp:2025-09-04 23:46:11.733205671 +0000 UTC m=+1.032482157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,}"
Sep 4 23:46:11.769782 kubelet[2274]: W0904 23:46:11.769704 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused
Sep 4 23:46:11.769947 kubelet[2274]: E0904 23:46:11.769789 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:11.770816 kubelet[2274]: E0904 23:46:11.770769 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="200ms"
Sep 4 23:46:11.772698 kubelet[2274]: I0904 23:46:11.772666 2274 factory.go:221] Registration of the containerd container factory successfully
Sep 4 23:46:11.772698 kubelet[2274]: I0904 23:46:11.772698 2274 factory.go:221] Registration of the systemd container factory successfully
Sep 4 23:46:11.772833 kubelet[2274]: I0904 23:46:11.772816 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 23:46:11.782894 kubelet[2274]: I0904 23:46:11.782808 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 23:46:11.785639 kubelet[2274]: I0904 23:46:11.785603 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 23:46:11.785873 kubelet[2274]: I0904 23:46:11.785820 2274 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 4 23:46:11.786003 kubelet[2274]: I0904 23:46:11.785873 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 23:46:11.786003 kubelet[2274]: I0904 23:46:11.785889 2274 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 4 23:46:11.786003 kubelet[2274]: E0904 23:46:11.785988 2274 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 23:46:11.796728 kubelet[2274]: W0904 23:46:11.796398 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused
Sep 4 23:46:11.796728 kubelet[2274]: E0904 23:46:11.796518 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 4 23:46:11.802090 kubelet[2274]: E0904 23:46:11.802047 2274 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 23:46:11.813649 kubelet[2274]: I0904 23:46:11.813606 2274 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 23:46:11.813649 kubelet[2274]: I0904 23:46:11.813632 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 23:46:11.813649 kubelet[2274]: I0904 23:46:11.813661 2274 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 23:46:11.816492 kubelet[2274]: I0904 23:46:11.816431 2274 policy_none.go:49] "None policy: Start"
Sep 4 23:46:11.816492 kubelet[2274]: I0904 23:46:11.816462 2274 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 23:46:11.816492 kubelet[2274]: I0904 23:46:11.816486 2274 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 23:46:11.826786 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 23:46:11.838025 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 23:46:11.843302 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 23:46:11.859381 kubelet[2274]: I0904 23:46:11.859344 2274 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:46:11.860329 kubelet[2274]: E0904 23:46:11.860261 2274 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found"
Sep 4 23:46:11.860601 kubelet[2274]: I0904 23:46:11.860579 2274 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:46:11.860746 kubelet[2274]: I0904 23:46:11.860698 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:46:11.861377 kubelet[2274]: I0904 23:46:11.861350 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:46:11.864365 kubelet[2274]: E0904 23:46:11.864338 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:46:11.864542 kubelet[2274]: E0904 23:46:11.864523 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found"
Sep 4 23:46:11.907886 systemd[1]: Created slice kubepods-burstable-podac87db985c6fe140c1ea6b7a6ba3c732.slice - libcontainer container kubepods-burstable-podac87db985c6fe140c1ea6b7a6ba3c732.slice.
Sep 4 23:46:11.920498 kubelet[2274]: E0904 23:46:11.920162 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:11.925801 systemd[1]: Created slice kubepods-burstable-pod224fc0bd98074352a7613ab2fbe8519f.slice - libcontainer container kubepods-burstable-pod224fc0bd98074352a7613ab2fbe8519f.slice.
Sep 4 23:46:11.936282 kubelet[2274]: E0904 23:46:11.935906 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:11.940095 systemd[1]: Created slice kubepods-burstable-pod89c70a3f244fb6f68946d4a02ea53fe8.slice - libcontainer container kubepods-burstable-pod89c70a3f244fb6f68946d4a02ea53fe8.slice.
Sep 4 23:46:11.943241 kubelet[2274]: E0904 23:46:11.943200 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:11.968013 kubelet[2274]: I0904 23:46:11.967879 2274 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:11.968516 kubelet[2274]: E0904 23:46:11.968459 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:11.972079 kubelet[2274]: E0904 23:46:11.972020 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="400ms"
Sep 4 23:46:12.062458 kubelet[2274]: I0904 23:46:12.062377 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062458 kubelet[2274]: I0904 23:46:12.062455 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89c70a3f244fb6f68946d4a02ea53fe8-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"89c70a3f244fb6f68946d4a02ea53fe8\") " pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062728 kubelet[2274]: I0904 23:46:12.062488 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062728 kubelet[2274]: I0904 23:46:12.062515 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062728 kubelet[2274]: I0904 23:46:12.062544 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062728 kubelet[2274]: I0904 23:46:12.062574 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062894 kubelet[2274]: I0904 23:46:12.062601 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062894 kubelet[2274]: I0904 23:46:12.062625 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699"
Sep 4 23:46:12.062894 kubelet[2274]: I0904 23:46:12.062652 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-k8s-certs\") pod
\"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:12.175134 kubelet[2274]: I0904 23:46:12.174980 2274 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:12.175949 kubelet[2274]: E0904 23:46:12.175498 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:12.222777 containerd[1483]: time="2025-09-04T23:46:12.222313906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:ac87db985c6fe140c1ea6b7a6ba3c732,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:12.238651 containerd[1483]: time="2025-09-04T23:46:12.237843232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:224fc0bd98074352a7613ab2fbe8519f,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:12.245121 containerd[1483]: time="2025-09-04T23:46:12.245067524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:89c70a3f244fb6f68946d4a02ea53fe8,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:12.373159 kubelet[2274]: E0904 23:46:12.373071 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="800ms" Sep 4 23:46:12.581712 kubelet[2274]: I0904 23:46:12.581636 2274 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:12.582282 kubelet[2274]: E0904 23:46:12.582217 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:12.690968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475392561.mount: Deactivated successfully. Sep 4 23:46:12.697626 containerd[1483]: time="2025-09-04T23:46:12.697545066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:12.701240 kubelet[2274]: W0904 23:46:12.701128 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused Sep 4 23:46:12.701240 kubelet[2274]: E0904 23:46:12.701195 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:12.702873 containerd[1483]: time="2025-09-04T23:46:12.702771139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Sep 4 23:46:12.705536 containerd[1483]: time="2025-09-04T23:46:12.705454516Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:12.707086 containerd[1483]: time="2025-09-04T23:46:12.707033032Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:12.708974 containerd[1483]: time="2025-09-04T23:46:12.708568322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:12.711195 containerd[1483]: time="2025-09-04T23:46:12.710860788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:12.711195 containerd[1483]: time="2025-09-04T23:46:12.711014171Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:12.713974 containerd[1483]: time="2025-09-04T23:46:12.713070103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:12.717012 containerd[1483]: time="2025-09-04T23:46:12.716250976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.791138ms" Sep 4 23:46:12.718739 containerd[1483]: time="2025-09-04T23:46:12.718682201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.67149ms" Sep 4 23:46:12.720059 containerd[1483]: time="2025-09-04T23:46:12.719695061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.4931ms" Sep 4 23:46:12.800066 kubelet[2274]: W0904 23:46:12.799882 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused Sep 4 23:46:12.800066 kubelet[2274]: E0904 23:46:12.800001 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:12.930528 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 4 23:46:12.942440 kubelet[2274]: W0904 23:46:12.941917 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699&limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused Sep 4 23:46:12.942440 kubelet[2274]: E0904 23:46:12.942285 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699&limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.983332908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.983400382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.983426865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.982482201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.982571049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.982597804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:12.983862 containerd[1483]: time="2025-09-04T23:46:12.982728894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:12.985361 containerd[1483]: time="2025-09-04T23:46:12.983989080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:12.988450 containerd[1483]: time="2025-09-04T23:46:12.988280434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:12.988938 containerd[1483]: time="2025-09-04T23:46:12.988848614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:12.989049 containerd[1483]: time="2025-09-04T23:46:12.988969134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:12.990329 containerd[1483]: time="2025-09-04T23:46:12.990252695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:13.034196 systemd[1]: Started cri-containerd-7b68e78e0e8ad2ad9775d64c64b1b6655f836d75a232a944fcacabe0d654a920.scope - libcontainer container 7b68e78e0e8ad2ad9775d64c64b1b6655f836d75a232a944fcacabe0d654a920. Sep 4 23:46:13.050394 systemd[1]: Started cri-containerd-2a80255b7980db889ff7e39892fd9da7b3d8704c1965c5a4bdb1cfc619e91b50.scope - libcontainer container 2a80255b7980db889ff7e39892fd9da7b3d8704c1965c5a4bdb1cfc619e91b50. Sep 4 23:46:13.058339 systemd[1]: Started cri-containerd-7e85ed2e7a6b16cc19f0672586b3691cbac81506db59fe0c63c7412f7adf7cde.scope - libcontainer container 7e85ed2e7a6b16cc19f0672586b3691cbac81506db59fe0c63c7412f7adf7cde. 
Sep 4 23:46:13.157877 containerd[1483]: time="2025-09-04T23:46:13.157811471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:89c70a3f244fb6f68946d4a02ea53fe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e85ed2e7a6b16cc19f0672586b3691cbac81506db59fe0c63c7412f7adf7cde\"" Sep 4 23:46:13.171455 kubelet[2274]: E0904 23:46:13.170757 2274 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911" Sep 4 23:46:13.174168 kubelet[2274]: E0904 23:46:13.173662 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699?timeout=10s\": dial tcp 10.128.0.91:6443: connect: connection refused" interval="1.6s" Sep 4 23:46:13.177404 containerd[1483]: time="2025-09-04T23:46:13.177333900Z" level=info msg="CreateContainer within sandbox \"7e85ed2e7a6b16cc19f0672586b3691cbac81506db59fe0c63c7412f7adf7cde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:46:13.180230 containerd[1483]: time="2025-09-04T23:46:13.180100300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:ac87db985c6fe140c1ea6b7a6ba3c732,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a80255b7980db889ff7e39892fd9da7b3d8704c1965c5a4bdb1cfc619e91b50\"" Sep 4 23:46:13.181954 kubelet[2274]: E0904 23:46:13.181497 2274 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911" Sep 4 
23:46:13.184079 containerd[1483]: time="2025-09-04T23:46:13.183896149Z" level=info msg="CreateContainer within sandbox \"2a80255b7980db889ff7e39892fd9da7b3d8704c1965c5a4bdb1cfc619e91b50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:46:13.202029 containerd[1483]: time="2025-09-04T23:46:13.201944706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,Uid:224fc0bd98074352a7613ab2fbe8519f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b68e78e0e8ad2ad9775d64c64b1b6655f836d75a232a944fcacabe0d654a920\"" Sep 4 23:46:13.204308 kubelet[2274]: E0904 23:46:13.204257 2274 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def3" Sep 4 23:46:13.206670 containerd[1483]: time="2025-09-04T23:46:13.206614340Z" level=info msg="CreateContainer within sandbox \"7e85ed2e7a6b16cc19f0672586b3691cbac81506db59fe0c63c7412f7adf7cde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cd5660c66efde037743e8a40e76e325d01ac423f6129e09d85e32f72c70a08f2\"" Sep 4 23:46:13.209309 containerd[1483]: time="2025-09-04T23:46:13.207653986Z" level=info msg="CreateContainer within sandbox \"7b68e78e0e8ad2ad9775d64c64b1b6655f836d75a232a944fcacabe0d654a920\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:46:13.211271 containerd[1483]: time="2025-09-04T23:46:13.211221161Z" level=info msg="StartContainer for \"cd5660c66efde037743e8a40e76e325d01ac423f6129e09d85e32f72c70a08f2\"" Sep 4 23:46:13.244367 containerd[1483]: time="2025-09-04T23:46:13.244305174Z" level=info msg="CreateContainer within sandbox \"2a80255b7980db889ff7e39892fd9da7b3d8704c1965c5a4bdb1cfc619e91b50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"c5a17a79c5a6a99d98a9796633bd698a1f59de3ad283d147e63f9090e93823fb\"" Sep 4 23:46:13.246377 containerd[1483]: time="2025-09-04T23:46:13.246329919Z" level=info msg="StartContainer for \"c5a17a79c5a6a99d98a9796633bd698a1f59de3ad283d147e63f9090e93823fb\"" Sep 4 23:46:13.251194 containerd[1483]: time="2025-09-04T23:46:13.251142029Z" level=info msg="CreateContainer within sandbox \"7b68e78e0e8ad2ad9775d64c64b1b6655f836d75a232a944fcacabe0d654a920\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c9cd0a7d501ba653c2131f9d91670860c9bea14a4e566616bafd7e68942284d\"" Sep 4 23:46:13.252974 containerd[1483]: time="2025-09-04T23:46:13.252915132Z" level=info msg="StartContainer for \"4c9cd0a7d501ba653c2131f9d91670860c9bea14a4e566616bafd7e68942284d\"" Sep 4 23:46:13.259709 kubelet[2274]: W0904 23:46:13.259389 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.91:6443: connect: connection refused Sep 4 23:46:13.259709 kubelet[2274]: E0904 23:46:13.259490 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.91:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:13.265418 systemd[1]: Started cri-containerd-cd5660c66efde037743e8a40e76e325d01ac423f6129e09d85e32f72c70a08f2.scope - libcontainer container cd5660c66efde037743e8a40e76e325d01ac423f6129e09d85e32f72c70a08f2. Sep 4 23:46:13.342587 systemd[1]: Started cri-containerd-4c9cd0a7d501ba653c2131f9d91670860c9bea14a4e566616bafd7e68942284d.scope - libcontainer container 4c9cd0a7d501ba653c2131f9d91670860c9bea14a4e566616bafd7e68942284d. 
Sep 4 23:46:13.345714 systemd[1]: Started cri-containerd-c5a17a79c5a6a99d98a9796633bd698a1f59de3ad283d147e63f9090e93823fb.scope - libcontainer container c5a17a79c5a6a99d98a9796633bd698a1f59de3ad283d147e63f9090e93823fb. Sep 4 23:46:13.380379 containerd[1483]: time="2025-09-04T23:46:13.380300361Z" level=info msg="StartContainer for \"cd5660c66efde037743e8a40e76e325d01ac423f6129e09d85e32f72c70a08f2\" returns successfully" Sep 4 23:46:13.389148 kubelet[2274]: I0904 23:46:13.389104 2274 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:13.389621 kubelet[2274]: E0904 23:46:13.389548 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.91:6443/api/v1/nodes\": dial tcp 10.128.0.91:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:13.442627 containerd[1483]: time="2025-09-04T23:46:13.441860357Z" level=info msg="StartContainer for \"c5a17a79c5a6a99d98a9796633bd698a1f59de3ad283d147e63f9090e93823fb\" returns successfully" Sep 4 23:46:13.501110 containerd[1483]: time="2025-09-04T23:46:13.501049242Z" level=info msg="StartContainer for \"4c9cd0a7d501ba653c2131f9d91670860c9bea14a4e566616bafd7e68942284d\" returns successfully" Sep 4 23:46:13.819877 kubelet[2274]: E0904 23:46:13.819826 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:13.820765 kubelet[2274]: E0904 23:46:13.820461 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:13.827715 kubelet[2274]: E0904 23:46:13.827384 2274 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:14.833052 kubelet[2274]: E0904 23:46:14.831905 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:14.833052 kubelet[2274]: E0904 23:46:14.832819 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:14.995422 kubelet[2274]: I0904 23:46:14.995386 2274 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.535334 kubelet[2274]: E0904 23:46:16.535266 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.656416 kubelet[2274]: E0904 23:46:16.656261 2274 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699.186239161b03b2a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,UID:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,},FirstTimestamp:2025-09-04 23:46:11.733205671 +0000 UTC m=+1.032482157,LastTimestamp:2025-09-04 23:46:11.733205671 +0000 UTC m=+1.032482157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,}" Sep 4 23:46:16.686732 kubelet[2274]: I0904 23:46:16.686677 2274 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.740042 kubelet[2274]: E0904 23:46:16.739873 2274 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699.186239161f1dc806 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,UID:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,},FirstTimestamp:2025-09-04 23:46:11.802023942 +0000 UTC m=+1.101300425,LastTimestamp:2025-09-04 23:46:11.802023942 +0000 UTC m=+1.101300425,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699,}" Sep 4 23:46:16.751066 kubelet[2274]: I0904 23:46:16.751019 2274 apiserver.go:52] "Watching apiserver" Sep 4 23:46:16.761140 kubelet[2274]: I0904 23:46:16.761093 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:46:16.761362 kubelet[2274]: I0904 23:46:16.761168 2274 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.778277 kubelet[2274]: E0904 23:46:16.777965 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.778277 kubelet[2274]: I0904 23:46:16.778039 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.782305 kubelet[2274]: E0904 23:46:16.782032 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.782305 kubelet[2274]: I0904 23:46:16.782071 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:16.791564 kubelet[2274]: E0904 23:46:16.791374 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:18.680008 systemd[1]: Reload requested from client PID 2546 ('systemctl') (unit session-7.scope)... Sep 4 23:46:18.680033 systemd[1]: Reloading... Sep 4 23:46:18.857012 zram_generator::config[2594]: No configuration found. 
Sep 4 23:46:19.003963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:19.178068 systemd[1]: Reloading finished in 497 ms. Sep 4 23:46:19.218825 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:19.242717 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:46:19.243095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:19.243182 systemd[1]: kubelet.service: Consumed 1.617s CPU time, 133M memory peak. Sep 4 23:46:19.253473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:20.230072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:20.246478 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:46:20.340195 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:20.340195 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:46:20.340195 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:46:20.341104 kubelet[2639]: I0904 23:46:20.340361 2639 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:46:20.356623 kubelet[2639]: I0904 23:46:20.356539 2639 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:46:20.356623 kubelet[2639]: I0904 23:46:20.356588 2639 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:46:20.360975 kubelet[2639]: I0904 23:46:20.357231 2639 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:46:20.360975 kubelet[2639]: I0904 23:46:20.359690 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:46:20.365049 kubelet[2639]: I0904 23:46:20.364999 2639 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:46:20.372321 kubelet[2639]: E0904 23:46:20.372265 2639 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:46:20.372321 kubelet[2639]: I0904 23:46:20.372319 2639 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:46:20.384475 kubelet[2639]: I0904 23:46:20.384420 2639 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:46:20.386217 kubelet[2639]: I0904 23:46:20.384816 2639 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:46:20.386217 kubelet[2639]: I0904 23:46:20.384882 2639 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:46:20.386217 kubelet[2639]: I0904 23:46:20.385248 2639 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 4 23:46:20.386217 kubelet[2639]: I0904 23:46:20.385268 2639 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:46:20.386647 kubelet[2639]: I0904 23:46:20.385356 2639 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:20.386647 kubelet[2639]: I0904 23:46:20.385596 2639 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:46:20.386647 kubelet[2639]: I0904 23:46:20.385633 2639 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:46:20.386647 kubelet[2639]: I0904 23:46:20.385671 2639 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:46:20.386647 kubelet[2639]: I0904 23:46:20.385690 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:46:20.390554 kubelet[2639]: I0904 23:46:20.390161 2639 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:46:20.390960 kubelet[2639]: I0904 23:46:20.390869 2639 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:46:20.398958 kubelet[2639]: I0904 23:46:20.397527 2639 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:46:20.398958 kubelet[2639]: I0904 23:46:20.397678 2639 server.go:1287] "Started kubelet" Sep 4 23:46:20.400600 kubelet[2639]: I0904 23:46:20.400507 2639 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:46:20.401510 kubelet[2639]: I0904 23:46:20.401479 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:46:20.408845 kubelet[2639]: I0904 23:46:20.408799 2639 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:46:20.415849 kubelet[2639]: I0904 23:46:20.415767 2639 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:46:20.420950 kubelet[2639]: 
I0904 23:46:20.420825 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:46:20.421274 kubelet[2639]: I0904 23:46:20.421247 2639 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:46:20.421680 kubelet[2639]: I0904 23:46:20.421659 2639 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:46:20.422257 kubelet[2639]: E0904 23:46:20.422227 2639 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" not found" Sep 4 23:46:20.427190 kubelet[2639]: I0904 23:46:20.427157 2639 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:46:20.427613 kubelet[2639]: I0904 23:46:20.427594 2639 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:20.431843 kubelet[2639]: I0904 23:46:20.431780 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:20.433902 sudo[2653]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:46:20.434579 sudo[2653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:46:20.441380 kubelet[2639]: I0904 23:46:20.440498 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:46:20.441380 kubelet[2639]: I0904 23:46:20.440567 2639 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:46:20.441380 kubelet[2639]: I0904 23:46:20.440601 2639 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:46:20.441380 kubelet[2639]: I0904 23:46:20.440614 2639 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:46:20.441380 kubelet[2639]: E0904 23:46:20.440699 2639 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:20.479321 kubelet[2639]: I0904 23:46:20.479278 2639 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:46:20.479593 kubelet[2639]: I0904 23:46:20.479576 2639 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:46:20.479856 kubelet[2639]: I0904 23:46:20.479827 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:20.490245 kubelet[2639]: E0904 23:46:20.490066 2639 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:46:20.541603 kubelet[2639]: E0904 23:46:20.541161 2639 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:46:20.617754 kubelet[2639]: I0904 23:46:20.616976 2639 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:46:20.617754 kubelet[2639]: I0904 23:46:20.617007 2639 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:20.617754 kubelet[2639]: I0904 23:46:20.617044 2639 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:20.617754 kubelet[2639]: I0904 23:46:20.617370 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:46:20.617754 kubelet[2639]: I0904 23:46:20.617393 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:46:20.620109 kubelet[2639]: I0904 23:46:20.618968 2639 policy_none.go:49] "None policy: Start" Sep 4 23:46:20.620109 
kubelet[2639]: I0904 23:46:20.619038 2639 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:46:20.620109 kubelet[2639]: I0904 23:46:20.619098 2639 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:46:20.620109 kubelet[2639]: I0904 23:46:20.619500 2639 state_mem.go:75] "Updated machine memory state" Sep 4 23:46:20.632966 kubelet[2639]: I0904 23:46:20.632469 2639 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:46:20.632966 kubelet[2639]: I0904 23:46:20.632795 2639 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:20.632966 kubelet[2639]: I0904 23:46:20.632814 2639 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:20.636912 kubelet[2639]: I0904 23:46:20.636878 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:20.640089 kubelet[2639]: E0904 23:46:20.639485 2639 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:46:20.745074 kubelet[2639]: I0904 23:46:20.742722 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.746769 kubelet[2639]: I0904 23:46:20.745738 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.746769 kubelet[2639]: I0904 23:46:20.745974 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.760414 kubelet[2639]: I0904 23:46:20.760306 2639 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.768042 kubelet[2639]: W0904 23:46:20.767994 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 23:46:20.780852 kubelet[2639]: W0904 23:46:20.780797 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 23:46:20.782111 kubelet[2639]: W0904 23:46:20.781723 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 23:46:20.796951 kubelet[2639]: I0904 23:46:20.794734 2639 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.796951 kubelet[2639]: I0904 23:46:20.794987 2639 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832187 kubelet[2639]: I0904 
23:46:20.832119 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89c70a3f244fb6f68946d4a02ea53fe8-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"89c70a3f244fb6f68946d4a02ea53fe8\") " pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832187 kubelet[2639]: I0904 23:46:20.832188 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832495 kubelet[2639]: I0904 23:46:20.832268 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832495 kubelet[2639]: I0904 23:46:20.832305 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac87db985c6fe140c1ea6b7a6ba3c732-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"ac87db985c6fe140c1ea6b7a6ba3c732\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832495 kubelet[2639]: I0904 23:46:20.832339 2639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832495 kubelet[2639]: I0904 23:46:20.832366 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832691 kubelet[2639]: I0904 23:46:20.832393 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832691 kubelet[2639]: I0904 23:46:20.832423 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:20.832691 kubelet[2639]: I0904 23:46:20.832465 2639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/224fc0bd98074352a7613ab2fbe8519f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" (UID: \"224fc0bd98074352a7613ab2fbe8519f\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:21.374687 sudo[2653]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:21.398393 kubelet[2639]: I0904 23:46:21.397942 2639 apiserver.go:52] "Watching apiserver" Sep 4 23:46:21.427674 kubelet[2639]: I0904 23:46:21.427557 2639 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:46:21.545265 kubelet[2639]: I0904 23:46:21.545217 2639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:21.561705 kubelet[2639]: W0904 23:46:21.561657 2639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 4 23:46:21.562054 kubelet[2639]: E0904 23:46:21.561865 2639 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" Sep 4 23:46:21.617180 kubelet[2639]: I0904 23:46:21.617070 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" podStartSLOduration=1.617046397 podStartE2EDuration="1.617046397s" podCreationTimestamp="2025-09-04 23:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:21.616277532 +0000 UTC 
m=+1.358911810" watchObservedRunningTime="2025-09-04 23:46:21.617046397 +0000 UTC m=+1.359680497" Sep 4 23:46:21.636340 kubelet[2639]: I0904 23:46:21.636084 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" podStartSLOduration=1.636058204 podStartE2EDuration="1.636058204s" podCreationTimestamp="2025-09-04 23:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:21.63502124 +0000 UTC m=+1.377655346" watchObservedRunningTime="2025-09-04 23:46:21.636058204 +0000 UTC m=+1.378692300" Sep 4 23:46:21.651714 kubelet[2639]: I0904 23:46:21.651637 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699" podStartSLOduration=1.651615499 podStartE2EDuration="1.651615499s" podCreationTimestamp="2025-09-04 23:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:21.650341183 +0000 UTC m=+1.392975284" watchObservedRunningTime="2025-09-04 23:46:21.651615499 +0000 UTC m=+1.394249599" Sep 4 23:46:23.390380 sudo[1729]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:23.435958 sshd[1728]: Connection closed by 139.178.68.195 port 54710 Sep 4 23:46:23.435094 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:23.458110 systemd[1]: sshd@6-10.128.0.91:22-139.178.68.195:54710.service: Deactivated successfully. Sep 4 23:46:23.466512 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:46:23.467512 systemd[1]: session-7.scope: Consumed 7.351s CPU time, 264.9M memory peak. Sep 4 23:46:23.474003 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. 
Sep 4 23:46:23.476878 systemd-logind[1462]: Removed session 7. Sep 4 23:46:25.250908 kubelet[2639]: I0904 23:46:25.250838 2639 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:46:25.252035 containerd[1483]: time="2025-09-04T23:46:25.251965759Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:46:25.252524 kubelet[2639]: I0904 23:46:25.252409 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:46:25.933193 systemd[1]: Created slice kubepods-besteffort-pod3212dfef_3a63_403b_b8e6_793e9249eb47.slice - libcontainer container kubepods-besteffort-pod3212dfef_3a63_403b_b8e6_793e9249eb47.slice. Sep 4 23:46:25.970856 kubelet[2639]: I0904 23:46:25.970451 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3212dfef-3a63-403b-b8e6-793e9249eb47-lib-modules\") pod \"kube-proxy-2595z\" (UID: \"3212dfef-3a63-403b-b8e6-793e9249eb47\") " pod="kube-system/kube-proxy-2595z" Sep 4 23:46:25.970856 kubelet[2639]: I0904 23:46:25.970633 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-etc-cni-netd\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.970856 kubelet[2639]: I0904 23:46:25.970786 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-xtables-lock\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.970856 kubelet[2639]: I0904 23:46:25.970821 2639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hubble-tls\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.974220 kubelet[2639]: I0904 23:46:25.973879 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-net\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.974220 kubelet[2639]: I0904 23:46:25.974067 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-cgroup\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.974598 kubelet[2639]: I0904 23:46:25.974105 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cni-path\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.974969 kubelet[2639]: I0904 23:46:25.974785 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc80af6-e3eb-49be-95e4-f1dc275b5747-clustermesh-secrets\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.975097 kubelet[2639]: I0904 23:46:25.975027 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3212dfef-3a63-403b-b8e6-793e9249eb47-xtables-lock\") pod \"kube-proxy-2595z\" (UID: \"3212dfef-3a63-403b-b8e6-793e9249eb47\") " pod="kube-system/kube-proxy-2595z" Sep 4 23:46:25.975732 kubelet[2639]: I0904 23:46:25.975082 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w768\" (UniqueName: \"kubernetes.io/projected/3212dfef-3a63-403b-b8e6-793e9249eb47-kube-api-access-2w768\") pod \"kube-proxy-2595z\" (UID: \"3212dfef-3a63-403b-b8e6-793e9249eb47\") " pod="kube-system/kube-proxy-2595z" Sep 4 23:46:25.977668 kubelet[2639]: I0904 23:46:25.977623 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-run\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.977799 kubelet[2639]: I0904 23:46:25.977761 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3212dfef-3a63-403b-b8e6-793e9249eb47-kube-proxy\") pod \"kube-proxy-2595z\" (UID: \"3212dfef-3a63-403b-b8e6-793e9249eb47\") " pod="kube-system/kube-proxy-2595z" Sep 4 23:46:25.977880 kubelet[2639]: I0904 23:46:25.977860 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-lib-modules\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.977959 kubelet[2639]: I0904 23:46:25.977913 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-kernel\") pod \"cilium-bcn7v\" 
(UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.978057 kubelet[2639]: I0904 23:46:25.978009 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-bpf-maps\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.978160 kubelet[2639]: I0904 23:46:25.978096 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hostproc\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.978227 kubelet[2639]: I0904 23:46:25.978155 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-config-path\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.978309 kubelet[2639]: I0904 23:46:25.978254 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djc46\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-kube-api-access-djc46\") pod \"cilium-bcn7v\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") " pod="kube-system/cilium-bcn7v" Sep 4 23:46:25.983002 systemd[1]: Created slice kubepods-burstable-pod3bc80af6_e3eb_49be_95e4_f1dc275b5747.slice - libcontainer container kubepods-burstable-pod3bc80af6_e3eb_49be_95e4_f1dc275b5747.slice. 
Sep 4 23:46:26.248206 containerd[1483]: time="2025-09-04T23:46:26.247568401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2595z,Uid:3212dfef-3a63-403b-b8e6-793e9249eb47,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:26.284791 containerd[1483]: time="2025-09-04T23:46:26.284671022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:26.286996 containerd[1483]: time="2025-09-04T23:46:26.286761836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:26.286996 containerd[1483]: time="2025-09-04T23:46:26.286809400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.287449 containerd[1483]: time="2025-09-04T23:46:26.287305266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.299273 containerd[1483]: time="2025-09-04T23:46:26.298739497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bcn7v,Uid:3bc80af6-e3eb-49be-95e4-f1dc275b5747,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:26.355251 systemd[1]: Created slice kubepods-besteffort-pod8f2706c7_b85e_44ec_8208_608cb4a3da92.slice - libcontainer container kubepods-besteffort-pod8f2706c7_b85e_44ec_8208_608cb4a3da92.slice. Sep 4 23:46:26.370223 systemd[1]: Started cri-containerd-70ff37fe868b4b32fb9d39b77473f7574efb8595b4926273b3ec6d09f9e1e58b.scope - libcontainer container 70ff37fe868b4b32fb9d39b77473f7574efb8595b4926273b3ec6d09f9e1e58b. 
Sep 4 23:46:26.381750 kubelet[2639]: I0904 23:46:26.381701 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f2706c7-b85e-44ec-8208-608cb4a3da92-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xmd6h\" (UID: \"8f2706c7-b85e-44ec-8208-608cb4a3da92\") " pod="kube-system/cilium-operator-6c4d7847fc-xmd6h" Sep 4 23:46:26.382915 kubelet[2639]: I0904 23:46:26.382822 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4pv\" (UniqueName: \"kubernetes.io/projected/8f2706c7-b85e-44ec-8208-608cb4a3da92-kube-api-access-fw4pv\") pod \"cilium-operator-6c4d7847fc-xmd6h\" (UID: \"8f2706c7-b85e-44ec-8208-608cb4a3da92\") " pod="kube-system/cilium-operator-6c4d7847fc-xmd6h" Sep 4 23:46:26.420039 containerd[1483]: time="2025-09-04T23:46:26.416144121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:26.420039 containerd[1483]: time="2025-09-04T23:46:26.416295325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:26.420039 containerd[1483]: time="2025-09-04T23:46:26.416323170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.420039 containerd[1483]: time="2025-09-04T23:46:26.417414703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.450571 systemd[1]: Started cri-containerd-175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3.scope - libcontainer container 175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3. 
Sep 4 23:46:26.479335 containerd[1483]: time="2025-09-04T23:46:26.479237698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2595z,Uid:3212dfef-3a63-403b-b8e6-793e9249eb47,Namespace:kube-system,Attempt:0,} returns sandbox id \"70ff37fe868b4b32fb9d39b77473f7574efb8595b4926273b3ec6d09f9e1e58b\"" Sep 4 23:46:26.483777 containerd[1483]: time="2025-09-04T23:46:26.483058521Z" level=info msg="CreateContainer within sandbox \"70ff37fe868b4b32fb9d39b77473f7574efb8595b4926273b3ec6d09f9e1e58b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:46:26.517801 containerd[1483]: time="2025-09-04T23:46:26.517002776Z" level=info msg="CreateContainer within sandbox \"70ff37fe868b4b32fb9d39b77473f7574efb8595b4926273b3ec6d09f9e1e58b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2957c25e34cdc7eb9987fed009cda2e65975b5fc73fda1b240a927cb6ee7e0d\"" Sep 4 23:46:26.520950 containerd[1483]: time="2025-09-04T23:46:26.519899496Z" level=info msg="StartContainer for \"b2957c25e34cdc7eb9987fed009cda2e65975b5fc73fda1b240a927cb6ee7e0d\"" Sep 4 23:46:26.532107 containerd[1483]: time="2025-09-04T23:46:26.532052474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bcn7v,Uid:3bc80af6-e3eb-49be-95e4-f1dc275b5747,Namespace:kube-system,Attempt:0,} returns sandbox id \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\"" Sep 4 23:46:26.537294 containerd[1483]: time="2025-09-04T23:46:26.537242885Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:46:26.578379 systemd[1]: Started cri-containerd-b2957c25e34cdc7eb9987fed009cda2e65975b5fc73fda1b240a927cb6ee7e0d.scope - libcontainer container b2957c25e34cdc7eb9987fed009cda2e65975b5fc73fda1b240a927cb6ee7e0d. 
Sep 4 23:46:26.631785 containerd[1483]: time="2025-09-04T23:46:26.631722816Z" level=info msg="StartContainer for \"b2957c25e34cdc7eb9987fed009cda2e65975b5fc73fda1b240a927cb6ee7e0d\" returns successfully" Sep 4 23:46:26.671278 containerd[1483]: time="2025-09-04T23:46:26.671220363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xmd6h,Uid:8f2706c7-b85e-44ec-8208-608cb4a3da92,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:26.714448 containerd[1483]: time="2025-09-04T23:46:26.714037796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:26.714448 containerd[1483]: time="2025-09-04T23:46:26.714156243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:26.714448 containerd[1483]: time="2025-09-04T23:46:26.714185770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.714448 containerd[1483]: time="2025-09-04T23:46:26.714310276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.748173 systemd[1]: Started cri-containerd-b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071.scope - libcontainer container b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071. Sep 4 23:46:26.834894 containerd[1483]: time="2025-09-04T23:46:26.834771832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xmd6h,Uid:8f2706c7-b85e-44ec-8208-608cb4a3da92,Namespace:kube-system,Attempt:0,} returns sandbox id \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\"" Sep 4 23:46:27.430780 update_engine[1465]: I20250904 23:46:27.430696 1465 update_attempter.cc:509] Updating boot flags... 
Sep 4 23:46:27.506182 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3006)
Sep 4 23:46:27.735957 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3007)
Sep 4 23:46:28.873031 kubelet[2639]: I0904 23:46:28.872952 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2595z" podStartSLOduration=3.872909621 podStartE2EDuration="3.872909621s" podCreationTimestamp="2025-09-04 23:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:27.617528656 +0000 UTC m=+7.360162756" watchObservedRunningTime="2025-09-04 23:46:28.872909621 +0000 UTC m=+8.615543721"
Sep 4 23:46:33.100045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241148811.mount: Deactivated successfully.
Sep 4 23:46:36.146855 containerd[1483]: time="2025-09-04T23:46:36.146789675Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:36.148566 containerd[1483]: time="2025-09-04T23:46:36.148454180Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 23:46:36.149770 containerd[1483]: time="2025-09-04T23:46:36.149260144Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:36.151568 containerd[1483]: time="2025-09-04T23:46:36.151527805Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.614226944s"
Sep 4 23:46:36.151765 containerd[1483]: time="2025-09-04T23:46:36.151737741Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 23:46:36.155671 containerd[1483]: time="2025-09-04T23:46:36.155631143Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:46:36.156992 containerd[1483]: time="2025-09-04T23:46:36.156952491Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:46:36.181299 containerd[1483]: time="2025-09-04T23:46:36.181239480Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\""
Sep 4 23:46:36.184788 containerd[1483]: time="2025-09-04T23:46:36.182073844Z" level=info msg="StartContainer for \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\""
Sep 4 23:46:36.232158 systemd[1]: Started cri-containerd-153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb.scope - libcontainer container 153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb.
Sep 4 23:46:36.275786 containerd[1483]: time="2025-09-04T23:46:36.275701725Z" level=info msg="StartContainer for \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\" returns successfully"
Sep 4 23:46:36.293698 systemd[1]: cri-containerd-153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb.scope: Deactivated successfully.
Sep 4 23:46:37.172163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb-rootfs.mount: Deactivated successfully.
Sep 4 23:46:38.120215 containerd[1483]: time="2025-09-04T23:46:38.120085512Z" level=info msg="shim disconnected" id=153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb namespace=k8s.io
Sep 4 23:46:38.120215 containerd[1483]: time="2025-09-04T23:46:38.120194391Z" level=warning msg="cleaning up after shim disconnected" id=153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb namespace=k8s.io
Sep 4 23:46:38.120215 containerd[1483]: time="2025-09-04T23:46:38.120211452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:38.639463 containerd[1483]: time="2025-09-04T23:46:38.639386290Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:46:38.667748 containerd[1483]: time="2025-09-04T23:46:38.667459763Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\""
Sep 4 23:46:38.670286 containerd[1483]: time="2025-09-04T23:46:38.670222638Z" level=info msg="StartContainer for \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\""
Sep 4 23:46:38.734637 systemd[1]: run-containerd-runc-k8s.io-81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901-runc.ooadsl.mount: Deactivated successfully.
Sep 4 23:46:38.746210 systemd[1]: Started cri-containerd-81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901.scope - libcontainer container 81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901.
Sep 4 23:46:38.803093 containerd[1483]: time="2025-09-04T23:46:38.803022470Z" level=info msg="StartContainer for \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\" returns successfully"
Sep 4 23:46:38.819463 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:38.820572 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:38.820971 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:38.831121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:38.831577 systemd[1]: cri-containerd-81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901.scope: Deactivated successfully.
Sep 4 23:46:38.878118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:38.886796 containerd[1483]: time="2025-09-04T23:46:38.886706085Z" level=info msg="shim disconnected" id=81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901 namespace=k8s.io
Sep 4 23:46:38.886796 containerd[1483]: time="2025-09-04T23:46:38.886791927Z" level=warning msg="cleaning up after shim disconnected" id=81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901 namespace=k8s.io
Sep 4 23:46:38.886796 containerd[1483]: time="2025-09-04T23:46:38.886805608Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:39.647248 containerd[1483]: time="2025-09-04T23:46:39.647194171Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:46:39.663100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901-rootfs.mount: Deactivated successfully.
Sep 4 23:46:39.707656 containerd[1483]: time="2025-09-04T23:46:39.707499891Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\""
Sep 4 23:46:39.710894 containerd[1483]: time="2025-09-04T23:46:39.710844482Z" level=info msg="StartContainer for \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\""
Sep 4 23:46:39.774284 systemd[1]: Started cri-containerd-1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d.scope - libcontainer container 1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d.
Sep 4 23:46:39.854353 systemd[1]: cri-containerd-1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d.scope: Deactivated successfully.
Sep 4 23:46:39.857814 containerd[1483]: time="2025-09-04T23:46:39.857722848Z" level=info msg="StartContainer for \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\" returns successfully"
Sep 4 23:46:39.938221 containerd[1483]: time="2025-09-04T23:46:39.937540899Z" level=info msg="shim disconnected" id=1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d namespace=k8s.io
Sep 4 23:46:39.938221 containerd[1483]: time="2025-09-04T23:46:39.937874795Z" level=warning msg="cleaning up after shim disconnected" id=1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d namespace=k8s.io
Sep 4 23:46:39.938221 containerd[1483]: time="2025-09-04T23:46:39.937896141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:40.606814 containerd[1483]: time="2025-09-04T23:46:40.606736487Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:40.608183 containerd[1483]: time="2025-09-04T23:46:40.608007835Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 23:46:40.609978 containerd[1483]: time="2025-09-04T23:46:40.609079509Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:40.611624 containerd[1483]: time="2025-09-04T23:46:40.611430614Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.45575014s"
Sep 4 23:46:40.611624 containerd[1483]: time="2025-09-04T23:46:40.611484210Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 23:46:40.615232 containerd[1483]: time="2025-09-04T23:46:40.614893283Z" level=info msg="CreateContainer within sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:46:40.635642 containerd[1483]: time="2025-09-04T23:46:40.635572980Z" level=info msg="CreateContainer within sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\""
Sep 4 23:46:40.638036 containerd[1483]: time="2025-09-04T23:46:40.636424999Z" level=info msg="StartContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\""
Sep 4 23:46:40.660661 containerd[1483]: time="2025-09-04T23:46:40.659854639Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:46:40.662722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d-rootfs.mount: Deactivated successfully.
Sep 4 23:46:40.722223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214163013.mount: Deactivated successfully.
Sep 4 23:46:40.732040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303303956.mount: Deactivated successfully.
Sep 4 23:46:40.734859 containerd[1483]: time="2025-09-04T23:46:40.734676513Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\""
Sep 4 23:46:40.737023 containerd[1483]: time="2025-09-04T23:46:40.736975998Z" level=info msg="StartContainer for \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\""
Sep 4 23:46:40.752609 systemd[1]: Started cri-containerd-51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab.scope - libcontainer container 51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab.
Sep 4 23:46:40.793386 systemd[1]: Started cri-containerd-27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45.scope - libcontainer container 27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45.
Sep 4 23:46:40.827252 containerd[1483]: time="2025-09-04T23:46:40.827085571Z" level=info msg="StartContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" returns successfully"
Sep 4 23:46:40.871693 containerd[1483]: time="2025-09-04T23:46:40.870474145Z" level=info msg="StartContainer for \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\" returns successfully"
Sep 4 23:46:40.875011 systemd[1]: cri-containerd-27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45.scope: Deactivated successfully.
Sep 4 23:46:41.065089 containerd[1483]: time="2025-09-04T23:46:41.064891739Z" level=info msg="shim disconnected" id=27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45 namespace=k8s.io
Sep 4 23:46:41.066077 containerd[1483]: time="2025-09-04T23:46:41.065778981Z" level=warning msg="cleaning up after shim disconnected" id=27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45 namespace=k8s.io
Sep 4 23:46:41.066077 containerd[1483]: time="2025-09-04T23:46:41.065812878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:41.115163 containerd[1483]: time="2025-09-04T23:46:41.113816839Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:46:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:46:41.673141 containerd[1483]: time="2025-09-04T23:46:41.672715410Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:46:41.714977 containerd[1483]: time="2025-09-04T23:46:41.712613798Z" level=info msg="CreateContainer within sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\""
Sep 4 23:46:41.714977 containerd[1483]: time="2025-09-04T23:46:41.713275972Z" level=info msg="StartContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\""
Sep 4 23:46:41.789597 systemd[1]: Started cri-containerd-618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a.scope - libcontainer container 618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a.
Sep 4 23:46:41.934336 containerd[1483]: time="2025-09-04T23:46:41.934129679Z" level=info msg="StartContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" returns successfully"
Sep 4 23:46:42.230759 kubelet[2639]: I0904 23:46:42.230227 2639 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 23:46:42.336330 kubelet[2639]: I0904 23:46:42.334765 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xmd6h" podStartSLOduration=2.5604534169999997 podStartE2EDuration="16.334720791s" podCreationTimestamp="2025-09-04 23:46:26 +0000 UTC" firstStartedPulling="2025-09-04 23:46:26.838470564 +0000 UTC m=+6.581104647" lastFinishedPulling="2025-09-04 23:46:40.612737942 +0000 UTC m=+20.355372021" observedRunningTime="2025-09-04 23:46:41.942016749 +0000 UTC m=+21.684650849" watchObservedRunningTime="2025-09-04 23:46:42.334720791 +0000 UTC m=+22.077354897"
Sep 4 23:46:42.353857 systemd[1]: Created slice kubepods-burstable-pod799d68fb_5aab_4880_8478_ee782da10c50.slice - libcontainer container kubepods-burstable-pod799d68fb_5aab_4880_8478_ee782da10c50.slice.
Sep 4 23:46:42.365464 systemd[1]: Created slice kubepods-burstable-pod1ddcd78b_47cd_4dd6_8941_5dde374e9a89.slice - libcontainer container kubepods-burstable-pod1ddcd78b_47cd_4dd6_8941_5dde374e9a89.slice.
Sep 4 23:46:42.422132 kubelet[2639]: I0904 23:46:42.421906 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jblf\" (UniqueName: \"kubernetes.io/projected/1ddcd78b-47cd-4dd6-8941-5dde374e9a89-kube-api-access-6jblf\") pod \"coredns-668d6bf9bc-nlq9f\" (UID: \"1ddcd78b-47cd-4dd6-8941-5dde374e9a89\") " pod="kube-system/coredns-668d6bf9bc-nlq9f"
Sep 4 23:46:42.422132 kubelet[2639]: I0904 23:46:42.422070 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ddcd78b-47cd-4dd6-8941-5dde374e9a89-config-volume\") pod \"coredns-668d6bf9bc-nlq9f\" (UID: \"1ddcd78b-47cd-4dd6-8941-5dde374e9a89\") " pod="kube-system/coredns-668d6bf9bc-nlq9f"
Sep 4 23:46:42.422531 kubelet[2639]: I0904 23:46:42.422238 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/799d68fb-5aab-4880-8478-ee782da10c50-config-volume\") pod \"coredns-668d6bf9bc-7cpqh\" (UID: \"799d68fb-5aab-4880-8478-ee782da10c50\") " pod="kube-system/coredns-668d6bf9bc-7cpqh"
Sep 4 23:46:42.424070 kubelet[2639]: I0904 23:46:42.423977 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczbb\" (UniqueName: \"kubernetes.io/projected/799d68fb-5aab-4880-8478-ee782da10c50-kube-api-access-hczbb\") pod \"coredns-668d6bf9bc-7cpqh\" (UID: \"799d68fb-5aab-4880-8478-ee782da10c50\") " pod="kube-system/coredns-668d6bf9bc-7cpqh"
Sep 4 23:46:42.664314 containerd[1483]: time="2025-09-04T23:46:42.664261340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cpqh,Uid:799d68fb-5aab-4880-8478-ee782da10c50,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:42.673452 containerd[1483]: time="2025-09-04T23:46:42.673000271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlq9f,Uid:1ddcd78b-47cd-4dd6-8941-5dde374e9a89,Namespace:kube-system,Attempt:0,}"
Sep 4 23:46:42.776547 kubelet[2639]: I0904 23:46:42.776463 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bcn7v" podStartSLOduration=8.15849427 podStartE2EDuration="17.776433861s" podCreationTimestamp="2025-09-04 23:46:25 +0000 UTC" firstStartedPulling="2025-09-04 23:46:26.535368911 +0000 UTC m=+6.278003001" lastFinishedPulling="2025-09-04 23:46:36.153308506 +0000 UTC m=+15.895942592" observedRunningTime="2025-09-04 23:46:42.773560455 +0000 UTC m=+22.516194581" watchObservedRunningTime="2025-09-04 23:46:42.776433861 +0000 UTC m=+22.519067952"
Sep 4 23:46:44.661667 systemd-networkd[1384]: cilium_host: Link UP
Sep 4 23:46:44.663360 systemd-networkd[1384]: cilium_net: Link UP
Sep 4 23:46:44.664351 systemd-networkd[1384]: cilium_net: Gained carrier
Sep 4 23:46:44.667313 systemd-networkd[1384]: cilium_host: Gained carrier
Sep 4 23:46:44.676626 systemd-networkd[1384]: cilium_net: Gained IPv6LL
Sep 4 23:46:44.871019 systemd-networkd[1384]: cilium_vxlan: Link UP
Sep 4 23:46:44.871040 systemd-networkd[1384]: cilium_vxlan: Gained carrier
Sep 4 23:46:45.273631 kernel: NET: Registered PF_ALG protocol family
Sep 4 23:46:45.626031 systemd-networkd[1384]: cilium_host: Gained IPv6LL
Sep 4 23:46:46.223222 systemd-networkd[1384]: lxc_health: Link UP
Sep 4 23:46:46.230888 systemd-networkd[1384]: lxc_health: Gained carrier
Sep 4 23:46:46.521146 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL
Sep 4 23:46:46.811051 kernel: eth0: renamed from tmpc9f3c
Sep 4 23:46:46.819020 systemd-networkd[1384]: lxcdb10c4b8175d: Link UP
Sep 4 23:46:46.834889 systemd-networkd[1384]: lxcdb10c4b8175d: Gained carrier
Sep 4 23:46:46.881731 kernel: eth0: renamed from tmp20e3a
Sep 4 23:46:46.880317 systemd-networkd[1384]: lxcbbba8f13d3d7: Link UP
Sep 4 23:46:46.894160 systemd-networkd[1384]: lxcbbba8f13d3d7: Gained carrier
Sep 4 23:46:47.352156 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Sep 4 23:46:48.120262 systemd-networkd[1384]: lxcbbba8f13d3d7: Gained IPv6LL
Sep 4 23:46:48.248708 systemd-networkd[1384]: lxcdb10c4b8175d: Gained IPv6LL
Sep 4 23:46:51.017033 ntpd[1444]: Listen normally on 8 cilium_host 192.168.0.140:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 8 cilium_host 192.168.0.140:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 9 cilium_net [fe80::8867:e9ff:fea3:7748%4]:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 10 cilium_host [fe80::b86e:dbff:fee4:a4e%5]:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 11 cilium_vxlan [fe80::b8a7:10ff:fe20:8e30%6]:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 12 lxc_health [fe80::1cc6:caff:fe39:3fa1%8]:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 13 lxcdb10c4b8175d [fe80::d4b3:baff:fe68:d426%10]:123
Sep 4 23:46:51.018264 ntpd[1444]: 4 Sep 23:46:51 ntpd[1444]: Listen normally on 14 lxcbbba8f13d3d7 [fe80::5416:7eff:feb5:353d%12]:123
Sep 4 23:46:51.017181 ntpd[1444]: Listen normally on 9 cilium_net [fe80::8867:e9ff:fea3:7748%4]:123
Sep 4 23:46:51.017278 ntpd[1444]: Listen normally on 10 cilium_host [fe80::b86e:dbff:fee4:a4e%5]:123
Sep 4 23:46:51.017349 ntpd[1444]: Listen normally on 11 cilium_vxlan [fe80::b8a7:10ff:fe20:8e30%6]:123
Sep 4 23:46:51.017412 ntpd[1444]: Listen normally on 12 lxc_health [fe80::1cc6:caff:fe39:3fa1%8]:123
Sep 4 23:46:51.017473 ntpd[1444]: Listen normally on 13 lxcdb10c4b8175d [fe80::d4b3:baff:fe68:d426%10]:123
Sep 4 23:46:51.017541 ntpd[1444]: Listen normally on 14 lxcbbba8f13d3d7 [fe80::5416:7eff:feb5:353d%12]:123
Sep 4 23:46:52.197627 kubelet[2639]: I0904 23:46:52.197268 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 23:46:52.435202 containerd[1483]: time="2025-09-04T23:46:52.434509647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:52.435202 containerd[1483]: time="2025-09-04T23:46:52.434619054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:52.435202 containerd[1483]: time="2025-09-04T23:46:52.434650305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:52.435202 containerd[1483]: time="2025-09-04T23:46:52.434807689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:52.494866 systemd[1]: Started cri-containerd-c9f3c20ec1889fca136d417c74d382cee91b06e210a63ebbddd689ed28459507.scope - libcontainer container c9f3c20ec1889fca136d417c74d382cee91b06e210a63ebbddd689ed28459507.
Sep 4 23:46:52.561470 containerd[1483]: time="2025-09-04T23:46:52.561277574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:46:52.561936 containerd[1483]: time="2025-09-04T23:46:52.561398649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:46:52.561936 containerd[1483]: time="2025-09-04T23:46:52.561438012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:52.561936 containerd[1483]: time="2025-09-04T23:46:52.561614001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:46:52.625262 systemd[1]: Started cri-containerd-20e3aff890a6e7ce08cf85cea5e5aa9f621c3764834f3a0926fda7dafce0c6c8.scope - libcontainer container 20e3aff890a6e7ce08cf85cea5e5aa9f621c3764834f3a0926fda7dafce0c6c8.
Sep 4 23:46:52.666259 containerd[1483]: time="2025-09-04T23:46:52.666155560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cpqh,Uid:799d68fb-5aab-4880-8478-ee782da10c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f3c20ec1889fca136d417c74d382cee91b06e210a63ebbddd689ed28459507\""
Sep 4 23:46:52.674774 containerd[1483]: time="2025-09-04T23:46:52.674534448Z" level=info msg="CreateContainer within sandbox \"c9f3c20ec1889fca136d417c74d382cee91b06e210a63ebbddd689ed28459507\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:52.708671 containerd[1483]: time="2025-09-04T23:46:52.708585424Z" level=info msg="CreateContainer within sandbox \"c9f3c20ec1889fca136d417c74d382cee91b06e210a63ebbddd689ed28459507\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee5133924396d367b478d1bc438df9d397f09d4aa89d793ca0115171d06ef679\""
Sep 4 23:46:52.711307 containerd[1483]: time="2025-09-04T23:46:52.711255567Z" level=info msg="StartContainer for \"ee5133924396d367b478d1bc438df9d397f09d4aa89d793ca0115171d06ef679\""
Sep 4 23:46:52.774194 systemd[1]: Started cri-containerd-ee5133924396d367b478d1bc438df9d397f09d4aa89d793ca0115171d06ef679.scope - libcontainer container ee5133924396d367b478d1bc438df9d397f09d4aa89d793ca0115171d06ef679.
Sep 4 23:46:52.818730 containerd[1483]: time="2025-09-04T23:46:52.818489878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nlq9f,Uid:1ddcd78b-47cd-4dd6-8941-5dde374e9a89,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e3aff890a6e7ce08cf85cea5e5aa9f621c3764834f3a0926fda7dafce0c6c8\""
Sep 4 23:46:52.824489 containerd[1483]: time="2025-09-04T23:46:52.824429720Z" level=info msg="CreateContainer within sandbox \"20e3aff890a6e7ce08cf85cea5e5aa9f621c3764834f3a0926fda7dafce0c6c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 23:46:52.845882 containerd[1483]: time="2025-09-04T23:46:52.845825470Z" level=info msg="CreateContainer within sandbox \"20e3aff890a6e7ce08cf85cea5e5aa9f621c3764834f3a0926fda7dafce0c6c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c042fab2ad8f4204e102dccbf7e56e74320355d63fef6406486b0ae0f3e4d60\""
Sep 4 23:46:52.847183 containerd[1483]: time="2025-09-04T23:46:52.847126262Z" level=info msg="StartContainer for \"9c042fab2ad8f4204e102dccbf7e56e74320355d63fef6406486b0ae0f3e4d60\""
Sep 4 23:46:52.897512 containerd[1483]: time="2025-09-04T23:46:52.895864925Z" level=info msg="StartContainer for \"ee5133924396d367b478d1bc438df9d397f09d4aa89d793ca0115171d06ef679\" returns successfully"
Sep 4 23:46:52.921910 systemd[1]: Started cri-containerd-9c042fab2ad8f4204e102dccbf7e56e74320355d63fef6406486b0ae0f3e4d60.scope - libcontainer container 9c042fab2ad8f4204e102dccbf7e56e74320355d63fef6406486b0ae0f3e4d60.
Sep 4 23:46:53.008576 containerd[1483]: time="2025-09-04T23:46:53.008432340Z" level=info msg="StartContainer for \"9c042fab2ad8f4204e102dccbf7e56e74320355d63fef6406486b0ae0f3e4d60\" returns successfully"
Sep 4 23:46:53.785824 kubelet[2639]: I0904 23:46:53.782685 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7cpqh" podStartSLOduration=27.782655441 podStartE2EDuration="27.782655441s" podCreationTimestamp="2025-09-04 23:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:53.761623715 +0000 UTC m=+33.504257819" watchObservedRunningTime="2025-09-04 23:46:53.782655441 +0000 UTC m=+33.525289541"
Sep 4 23:46:53.812582 kubelet[2639]: I0904 23:46:53.812496 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nlq9f" podStartSLOduration=27.812454854 podStartE2EDuration="27.812454854s" podCreationTimestamp="2025-09-04 23:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:53.808514035 +0000 UTC m=+33.551148137" watchObservedRunningTime="2025-09-04 23:46:53.812454854 +0000 UTC m=+33.555088946"
Sep 4 23:47:12.764596 systemd[1]: Started sshd@7-10.128.0.91:22-139.178.68.195:46536.service - OpenSSH per-connection server daemon (139.178.68.195:46536).
Sep 4 23:47:13.078960 sshd[4024]: Accepted publickey for core from 139.178.68.195 port 46536 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:13.081913 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:13.093123 systemd-logind[1462]: New session 8 of user core.
Sep 4 23:47:13.099229 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 23:47:13.419335 sshd[4026]: Connection closed by 139.178.68.195 port 46536
Sep 4 23:47:13.421018 sshd-session[4024]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:13.427781 systemd[1]: sshd@7-10.128.0.91:22-139.178.68.195:46536.service: Deactivated successfully.
Sep 4 23:47:13.431642 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 23:47:13.433524 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit.
Sep 4 23:47:13.435528 systemd-logind[1462]: Removed session 8.
Sep 4 23:47:18.482419 systemd[1]: Started sshd@8-10.128.0.91:22-139.178.68.195:46542.service - OpenSSH per-connection server daemon (139.178.68.195:46542).
Sep 4 23:47:18.786514 sshd[4039]: Accepted publickey for core from 139.178.68.195 port 46542 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:18.788465 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:18.796383 systemd-logind[1462]: New session 9 of user core.
Sep 4 23:47:18.804173 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 23:47:19.087964 sshd[4041]: Connection closed by 139.178.68.195 port 46542
Sep 4 23:47:19.088968 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:19.093622 systemd[1]: sshd@8-10.128.0.91:22-139.178.68.195:46542.service: Deactivated successfully.
Sep 4 23:47:19.097931 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 23:47:19.100456 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit.
Sep 4 23:47:19.102113 systemd-logind[1462]: Removed session 9.
Sep 4 23:47:24.154487 systemd[1]: Started sshd@9-10.128.0.91:22-139.178.68.195:53366.service - OpenSSH per-connection server daemon (139.178.68.195:53366).
Sep 4 23:47:24.461806 sshd[4057]: Accepted publickey for core from 139.178.68.195 port 53366 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:24.462868 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:24.471354 systemd-logind[1462]: New session 10 of user core.
Sep 4 23:47:24.478239 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 23:47:24.755918 sshd[4059]: Connection closed by 139.178.68.195 port 53366
Sep 4 23:47:24.757454 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:24.762713 systemd[1]: sshd@9-10.128.0.91:22-139.178.68.195:53366.service: Deactivated successfully.
Sep 4 23:47:24.765819 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 23:47:24.767969 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
Sep 4 23:47:24.769812 systemd-logind[1462]: Removed session 10.
Sep 4 23:47:29.824870 systemd[1]: Started sshd@10-10.128.0.91:22-139.178.68.195:53378.service - OpenSSH per-connection server daemon (139.178.68.195:53378).
Sep 4 23:47:30.129702 sshd[4074]: Accepted publickey for core from 139.178.68.195 port 53378 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:30.132653 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:30.141048 systemd-logind[1462]: New session 11 of user core.
Sep 4 23:47:30.157353 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 23:47:30.449194 sshd[4076]: Connection closed by 139.178.68.195 port 53378
Sep 4 23:47:30.451291 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:30.456384 systemd[1]: sshd@10-10.128.0.91:22-139.178.68.195:53378.service: Deactivated successfully.
Sep 4 23:47:30.461008 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 23:47:30.463586 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
Sep 4 23:47:30.466826 systemd-logind[1462]: Removed session 11.
Sep 4 23:47:30.510445 systemd[1]: Started sshd@11-10.128.0.91:22-139.178.68.195:47098.service - OpenSSH per-connection server daemon (139.178.68.195:47098).
Sep 4 23:47:30.822610 sshd[4089]: Accepted publickey for core from 139.178.68.195 port 47098 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:30.824745 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:30.832157 systemd-logind[1462]: New session 12 of user core.
Sep 4 23:47:30.840246 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 23:47:31.211716 sshd[4091]: Connection closed by 139.178.68.195 port 47098
Sep 4 23:47:31.209992 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:31.219134 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
Sep 4 23:47:31.220662 systemd[1]: sshd@11-10.128.0.91:22-139.178.68.195:47098.service: Deactivated successfully.
Sep 4 23:47:31.230059 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 23:47:31.231998 systemd-logind[1462]: Removed session 12.
Sep 4 23:47:31.274479 systemd[1]: Started sshd@12-10.128.0.91:22-139.178.68.195:47104.service - OpenSSH per-connection server daemon (139.178.68.195:47104).
Sep 4 23:47:31.589747 sshd[4101]: Accepted publickey for core from 139.178.68.195 port 47104 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:31.591866 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:31.599463 systemd-logind[1462]: New session 13 of user core.
Sep 4 23:47:31.608497 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 23:47:31.908909 sshd[4103]: Connection closed by 139.178.68.195 port 47104
Sep 4 23:47:31.910516 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:31.916305 systemd[1]: sshd@12-10.128.0.91:22-139.178.68.195:47104.service: Deactivated successfully.
Sep 4 23:47:31.920753 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 23:47:31.923888 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit.
Sep 4 23:47:31.926017 systemd-logind[1462]: Removed session 13.
Sep 4 23:47:36.970439 systemd[1]: Started sshd@13-10.128.0.91:22-139.178.68.195:47118.service - OpenSSH per-connection server daemon (139.178.68.195:47118).
Sep 4 23:47:37.276860 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 47118 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:37.279012 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:37.285823 systemd-logind[1462]: New session 14 of user core.
Sep 4 23:47:37.294233 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 23:47:37.585802 sshd[4118]: Connection closed by 139.178.68.195 port 47118
Sep 4 23:47:37.587029 sshd-session[4116]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:37.593508 systemd[1]: sshd@13-10.128.0.91:22-139.178.68.195:47118.service: Deactivated successfully.
Sep 4 23:47:37.597696 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 23:47:37.600136 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit.
Sep 4 23:47:37.602025 systemd-logind[1462]: Removed session 14.
Sep 4 23:47:42.643397 systemd[1]: Started sshd@14-10.128.0.91:22-139.178.68.195:60930.service - OpenSSH per-connection server daemon (139.178.68.195:60930).
Sep 4 23:47:42.946204 sshd[4130]: Accepted publickey for core from 139.178.68.195 port 60930 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:42.948537 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:42.955634 systemd-logind[1462]: New session 15 of user core.
Sep 4 23:47:42.967296 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 23:47:43.268035 sshd[4132]: Connection closed by 139.178.68.195 port 60930
Sep 4 23:47:43.269393 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:43.276337 systemd[1]: sshd@14-10.128.0.91:22-139.178.68.195:60930.service: Deactivated successfully.
Sep 4 23:47:43.280079 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 23:47:43.282060 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit.
Sep 4 23:47:43.283718 systemd-logind[1462]: Removed session 15.
Sep 4 23:47:48.329523 systemd[1]: Started sshd@15-10.128.0.91:22-139.178.68.195:60946.service - OpenSSH per-connection server daemon (139.178.68.195:60946).
Sep 4 23:47:48.641989 sshd[4144]: Accepted publickey for core from 139.178.68.195 port 60946 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:48.644361 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:48.651147 systemd-logind[1462]: New session 16 of user core.
Sep 4 23:47:48.657227 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 23:47:48.958349 sshd[4146]: Connection closed by 139.178.68.195 port 60946
Sep 4 23:47:48.959908 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:48.966133 systemd[1]: sshd@15-10.128.0.91:22-139.178.68.195:60946.service: Deactivated successfully.
Sep 4 23:47:48.969469 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 23:47:48.971268 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit.
Sep 4 23:47:48.973000 systemd-logind[1462]: Removed session 16.
Sep 4 23:47:49.023484 systemd[1]: Started sshd@16-10.128.0.91:22-139.178.68.195:60960.service - OpenSSH per-connection server daemon (139.178.68.195:60960).
Sep 4 23:47:49.331980 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 60960 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:49.334231 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:49.341326 systemd-logind[1462]: New session 17 of user core.
Sep 4 23:47:49.348270 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 23:47:49.706064 sshd[4160]: Connection closed by 139.178.68.195 port 60960
Sep 4 23:47:49.707630 sshd-session[4158]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:49.714172 systemd[1]: sshd@16-10.128.0.91:22-139.178.68.195:60960.service: Deactivated successfully.
Sep 4 23:47:49.717763 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 23:47:49.719663 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit.
Sep 4 23:47:49.721445 systemd-logind[1462]: Removed session 17.
Sep 4 23:47:49.765518 systemd[1]: Started sshd@17-10.128.0.91:22-139.178.68.195:60964.service - OpenSSH per-connection server daemon (139.178.68.195:60964).
Sep 4 23:47:50.069991 sshd[4170]: Accepted publickey for core from 139.178.68.195 port 60964 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:50.072004 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:50.079295 systemd-logind[1462]: New session 18 of user core.
Sep 4 23:47:50.086287 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 23:47:51.055161 sshd[4172]: Connection closed by 139.178.68.195 port 60964
Sep 4 23:47:51.057134 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:51.072210 systemd[1]: sshd@17-10.128.0.91:22-139.178.68.195:60964.service: Deactivated successfully.
Sep 4 23:47:51.081335 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 23:47:51.086242 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit.
Sep 4 23:47:51.096096 systemd-logind[1462]: Removed session 18.
Sep 4 23:47:51.126355 systemd[1]: Started sshd@18-10.128.0.91:22-139.178.68.195:42168.service - OpenSSH per-connection server daemon (139.178.68.195:42168).
Sep 4 23:47:51.432144 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 42168 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:51.436026 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:51.444386 systemd-logind[1462]: New session 19 of user core.
Sep 4 23:47:51.452222 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 23:47:51.856726 sshd[4191]: Connection closed by 139.178.68.195 port 42168
Sep 4 23:47:51.859164 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:51.863836 systemd[1]: sshd@18-10.128.0.91:22-139.178.68.195:42168.service: Deactivated successfully.
Sep 4 23:47:51.869540 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 23:47:51.874912 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit.
Sep 4 23:47:51.876892 systemd-logind[1462]: Removed session 19.
Sep 4 23:47:51.916309 systemd[1]: Started sshd@19-10.128.0.91:22-139.178.68.195:42172.service - OpenSSH per-connection server daemon (139.178.68.195:42172).
Sep 4 23:47:52.228080 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 42172 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:52.228594 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:52.235339 systemd-logind[1462]: New session 20 of user core.
Sep 4 23:47:52.240225 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 23:47:52.537047 sshd[4203]: Connection closed by 139.178.68.195 port 42172
Sep 4 23:47:52.538688 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:52.544379 systemd[1]: sshd@19-10.128.0.91:22-139.178.68.195:42172.service: Deactivated successfully.
Sep 4 23:47:52.548322 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 23:47:52.553079 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit.
Sep 4 23:47:52.557038 systemd-logind[1462]: Removed session 20.
Sep 4 23:47:57.593759 systemd[1]: Started sshd@20-10.128.0.91:22-139.178.68.195:42178.service - OpenSSH per-connection server daemon (139.178.68.195:42178).
Sep 4 23:47:57.918658 sshd[4218]: Accepted publickey for core from 139.178.68.195 port 42178 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:47:57.920762 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:47:57.930066 systemd-logind[1462]: New session 21 of user core.
Sep 4 23:47:57.939387 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 23:47:58.243966 sshd[4220]: Connection closed by 139.178.68.195 port 42178
Sep 4 23:47:58.245570 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:58.251495 systemd[1]: sshd@20-10.128.0.91:22-139.178.68.195:42178.service: Deactivated successfully.
Sep 4 23:47:58.255674 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 23:47:58.258891 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit.
Sep 4 23:47:58.262085 systemd-logind[1462]: Removed session 21.
Sep 4 23:48:03.310437 systemd[1]: Started sshd@21-10.128.0.91:22-139.178.68.195:49852.service - OpenSSH per-connection server daemon (139.178.68.195:49852).
Sep 4 23:48:03.609939 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 49852 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:48:03.612303 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:03.619533 systemd-logind[1462]: New session 22 of user core.
Sep 4 23:48:03.629298 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:48:03.915501 sshd[4235]: Connection closed by 139.178.68.195 port 49852
Sep 4 23:48:03.917228 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:03.923562 systemd[1]: sshd@21-10.128.0.91:22-139.178.68.195:49852.service: Deactivated successfully.
Sep 4 23:48:03.927315 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 23:48:03.929341 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit.
Sep 4 23:48:03.931639 systemd-logind[1462]: Removed session 22.
Sep 4 23:48:08.978675 systemd[1]: Started sshd@22-10.128.0.91:22-139.178.68.195:49858.service - OpenSSH per-connection server daemon (139.178.68.195:49858).
Sep 4 23:48:09.290111 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 49858 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:48:09.291977 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:09.299808 systemd-logind[1462]: New session 23 of user core.
Sep 4 23:48:09.307323 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:48:09.601672 sshd[4250]: Connection closed by 139.178.68.195 port 49858
Sep 4 23:48:09.603001 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:09.609464 systemd[1]: sshd@22-10.128.0.91:22-139.178.68.195:49858.service: Deactivated successfully.
Sep 4 23:48:09.614469 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:48:09.616085 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:48:09.617726 systemd-logind[1462]: Removed session 23.
Sep 4 23:48:09.664550 systemd[1]: Started sshd@23-10.128.0.91:22-139.178.68.195:49860.service - OpenSSH per-connection server daemon (139.178.68.195:49860).
Sep 4 23:48:09.965690 sshd[4262]: Accepted publickey for core from 139.178.68.195 port 49860 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:48:09.967381 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:09.974578 systemd-logind[1462]: New session 24 of user core.
Sep 4 23:48:09.982424 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 23:48:11.959587 containerd[1483]: time="2025-09-04T23:48:11.959422143Z" level=info msg="StopContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" with timeout 30 (s)"
Sep 4 23:48:11.963189 containerd[1483]: time="2025-09-04T23:48:11.963123704Z" level=info msg="Stop container \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" with signal terminated"
Sep 4 23:48:12.008133 systemd[1]: cri-containerd-51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab.scope: Deactivated successfully.
Sep 4 23:48:12.029796 containerd[1483]: time="2025-09-04T23:48:12.029620650Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:48:12.045954 containerd[1483]: time="2025-09-04T23:48:12.044688573Z" level=info msg="StopContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" with timeout 2 (s)"
Sep 4 23:48:12.046413 containerd[1483]: time="2025-09-04T23:48:12.046352118Z" level=info msg="Stop container \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" with signal terminated"
Sep 4 23:48:12.077416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab-rootfs.mount: Deactivated successfully.
Sep 4 23:48:12.089168 systemd-networkd[1384]: lxc_health: Link DOWN
Sep 4 23:48:12.089187 systemd-networkd[1384]: lxc_health: Lost carrier
Sep 4 23:48:12.102243 containerd[1483]: time="2025-09-04T23:48:12.102100981Z" level=info msg="shim disconnected" id=51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab namespace=k8s.io
Sep 4 23:48:12.102243 containerd[1483]: time="2025-09-04T23:48:12.102247390Z" level=warning msg="cleaning up after shim disconnected" id=51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab namespace=k8s.io
Sep 4 23:48:12.102769 containerd[1483]: time="2025-09-04T23:48:12.102266227Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:12.123321 systemd[1]: cri-containerd-618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a.scope: Deactivated successfully.
Sep 4 23:48:12.123859 systemd[1]: cri-containerd-618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a.scope: Consumed 10.631s CPU time, 124.9M memory peak, 144K read from disk, 13.3M written to disk.
Sep 4 23:48:12.165544 containerd[1483]: time="2025-09-04T23:48:12.165470395Z" level=info msg="StopContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" returns successfully"
Sep 4 23:48:12.166716 containerd[1483]: time="2025-09-04T23:48:12.166656299Z" level=info msg="StopPodSandbox for \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\""
Sep 4 23:48:12.167179 containerd[1483]: time="2025-09-04T23:48:12.166727413Z" level=info msg="Container to stop \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.172089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071-shm.mount: Deactivated successfully.
Sep 4 23:48:12.195043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a-rootfs.mount: Deactivated successfully.
Sep 4 23:48:12.200741 containerd[1483]: time="2025-09-04T23:48:12.199610569Z" level=info msg="shim disconnected" id=618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a namespace=k8s.io
Sep 4 23:48:12.200741 containerd[1483]: time="2025-09-04T23:48:12.199716438Z" level=warning msg="cleaning up after shim disconnected" id=618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a namespace=k8s.io
Sep 4 23:48:12.200741 containerd[1483]: time="2025-09-04T23:48:12.199732827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:12.200290 systemd[1]: cri-containerd-b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071.scope: Deactivated successfully.
Sep 4 23:48:12.250831 containerd[1483]: time="2025-09-04T23:48:12.247743557Z" level=info msg="StopContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" returns successfully"
Sep 4 23:48:12.250831 containerd[1483]: time="2025-09-04T23:48:12.249309778Z" level=info msg="StopPodSandbox for \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\""
Sep 4 23:48:12.253254 containerd[1483]: time="2025-09-04T23:48:12.249478203Z" level=info msg="Container to stop \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.253254 containerd[1483]: time="2025-09-04T23:48:12.252228398Z" level=info msg="Container to stop \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.253254 containerd[1483]: time="2025-09-04T23:48:12.253169015Z" level=info msg="Container to stop \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.253866 containerd[1483]: time="2025-09-04T23:48:12.253424994Z" level=info msg="Container to stop \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.253866 containerd[1483]: time="2025-09-04T23:48:12.253456001Z" level=info msg="Container to stop \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 23:48:12.266915 containerd[1483]: time="2025-09-04T23:48:12.266808627Z" level=info msg="shim disconnected" id=b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071 namespace=k8s.io
Sep 4 23:48:12.267303 containerd[1483]: time="2025-09-04T23:48:12.266915259Z" level=warning msg="cleaning up after shim disconnected" id=b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071 namespace=k8s.io
Sep 4 23:48:12.267303 containerd[1483]: time="2025-09-04T23:48:12.266960999Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:12.273569 systemd[1]: cri-containerd-175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3.scope: Deactivated successfully.
Sep 4 23:48:12.309315 containerd[1483]: time="2025-09-04T23:48:12.309240791Z" level=info msg="TearDown network for sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" successfully"
Sep 4 23:48:12.309315 containerd[1483]: time="2025-09-04T23:48:12.309298931Z" level=info msg="StopPodSandbox for \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" returns successfully"
Sep 4 23:48:12.342209 containerd[1483]: time="2025-09-04T23:48:12.342059839Z" level=info msg="shim disconnected" id=175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3 namespace=k8s.io
Sep 4 23:48:12.344547 containerd[1483]: time="2025-09-04T23:48:12.343984974Z" level=warning msg="cleaning up after shim disconnected" id=175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3 namespace=k8s.io
Sep 4 23:48:12.344547 containerd[1483]: time="2025-09-04T23:48:12.344165447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:12.379114 containerd[1483]: time="2025-09-04T23:48:12.379045287Z" level=info msg="TearDown network for sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" successfully"
Sep 4 23:48:12.379432 containerd[1483]: time="2025-09-04T23:48:12.379375393Z" level=info msg="StopPodSandbox for \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" returns successfully"
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.497823 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-run\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.497916 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-xtables-lock\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.497995 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-etc-cni-netd\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.498021 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-lib-modules\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.498037 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.499972 kubelet[2639]: I0904 23:48:12.498061 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-config-path\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502042 kubelet[2639]: I0904 23:48:12.498099 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f2706c7-b85e-44ec-8208-608cb4a3da92-cilium-config-path\") pod \"8f2706c7-b85e-44ec-8208-608cb4a3da92\" (UID: \"8f2706c7-b85e-44ec-8208-608cb4a3da92\") "
Sep 4 23:48:12.502042 kubelet[2639]: I0904 23:48:12.498125 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.502042 kubelet[2639]: I0904 23:48:12.498134 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc80af6-e3eb-49be-95e4-f1dc275b5747-clustermesh-secrets\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502042 kubelet[2639]: I0904 23:48:12.498154 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.502042 kubelet[2639]: I0904 23:48:12.498169 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw4pv\" (UniqueName: \"kubernetes.io/projected/8f2706c7-b85e-44ec-8208-608cb4a3da92-kube-api-access-fw4pv\") pod \"8f2706c7-b85e-44ec-8208-608cb4a3da92\" (UID: \"8f2706c7-b85e-44ec-8208-608cb4a3da92\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498207 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-kernel\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498264 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-bpf-maps\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498292 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hostproc\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498319 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cni-path\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498350 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-cgroup\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502408 kubelet[2639]: I0904 23:48:12.498376 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-net\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502705 kubelet[2639]: I0904 23:48:12.498409 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djc46\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-kube-api-access-djc46\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502705 kubelet[2639]: I0904 23:48:12.498445 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hubble-tls\") pod \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\" (UID: \"3bc80af6-e3eb-49be-95e4-f1dc275b5747\") "
Sep 4 23:48:12.502705 kubelet[2639]: I0904 23:48:12.498526 2639 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-xtables-lock\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.502705 kubelet[2639]: I0904 23:48:12.498560 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-run\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.502705 kubelet[2639]: I0904 23:48:12.498578 2639 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-etc-cni-netd\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.506072 kubelet[2639]: I0904 23:48:12.505902 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:48:12.506420 kubelet[2639]: I0904 23:48:12.506390 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.506727 kubelet[2639]: I0904 23:48:12.506683 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:48:12.509227 kubelet[2639]: I0904 23:48:12.506854 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.509642 kubelet[2639]: I0904 23:48:12.507517 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.509642 kubelet[2639]: I0904 23:48:12.507572 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.509642 kubelet[2639]: I0904 23:48:12.507600 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.509642 kubelet[2639]: I0904 23:48:12.508714 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.509642 kubelet[2639]: I0904 23:48:12.508765 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 23:48:12.515281 kubelet[2639]: I0904 23:48:12.515145 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc80af6-e3eb-49be-95e4-f1dc275b5747-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 4 23:48:12.516174 kubelet[2639]: I0904 23:48:12.516063 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f2706c7-b85e-44ec-8208-608cb4a3da92-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f2706c7-b85e-44ec-8208-608cb4a3da92" (UID: "8f2706c7-b85e-44ec-8208-608cb4a3da92"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 23:48:12.517837 kubelet[2639]: I0904 23:48:12.517759 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-kube-api-access-djc46" (OuterVolumeSpecName: "kube-api-access-djc46") pod "3bc80af6-e3eb-49be-95e4-f1dc275b5747" (UID: "3bc80af6-e3eb-49be-95e4-f1dc275b5747"). InnerVolumeSpecName "kube-api-access-djc46". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:48:12.518224 kubelet[2639]: I0904 23:48:12.518172 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2706c7-b85e-44ec-8208-608cb4a3da92-kube-api-access-fw4pv" (OuterVolumeSpecName: "kube-api-access-fw4pv") pod "8f2706c7-b85e-44ec-8208-608cb4a3da92" (UID: "8f2706c7-b85e-44ec-8208-608cb4a3da92"). InnerVolumeSpecName "kube-api-access-fw4pv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 23:48:12.599821 kubelet[2639]: I0904 23:48:12.599731 2639 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-lib-modules\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.599821 kubelet[2639]: I0904 23:48:12.599795 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-config-path\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.599821 kubelet[2639]: I0904 23:48:12.599817 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f2706c7-b85e-44ec-8208-608cb4a3da92-cilium-config-path\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.599821 kubelet[2639]: I0904 23:48:12.599834 2639 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bc80af6-e3eb-49be-95e4-f1dc275b5747-clustermesh-secrets\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\""
Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.599853 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fw4pv\" (UniqueName:
\"kubernetes.io/projected/8f2706c7-b85e-44ec-8208-608cb4a3da92-kube-api-access-fw4pv\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.599871 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-kernel\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.599916 2639 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-bpf-maps\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.599975 2639 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hostproc\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.599991 2639 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cni-path\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.600007 2639 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-hubble-tls\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600266 kubelet[2639]: I0904 23:48:12.600022 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-cilium-cgroup\") on node 
\"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600476 kubelet[2639]: I0904 23:48:12.600048 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bc80af6-e3eb-49be-95e4-f1dc275b5747-host-proc-sys-net\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.600476 kubelet[2639]: I0904 23:48:12.600065 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djc46\" (UniqueName: \"kubernetes.io/projected/3bc80af6-e3eb-49be-95e4-f1dc275b5747-kube-api-access-djc46\") on node \"ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" DevicePath \"\"" Sep 4 23:48:12.972813 kubelet[2639]: I0904 23:48:12.972187 2639 scope.go:117] "RemoveContainer" containerID="51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab" Sep 4 23:48:12.977425 containerd[1483]: time="2025-09-04T23:48:12.976215682Z" level=info msg="RemoveContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\"" Sep 4 23:48:12.991670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071-rootfs.mount: Deactivated successfully. Sep 4 23:48:12.991875 systemd[1]: var-lib-kubelet-pods-8f2706c7\x2db85e\x2d44ec\x2d8208\x2d608cb4a3da92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfw4pv.mount: Deactivated successfully. Sep 4 23:48:12.992029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3-rootfs.mount: Deactivated successfully. Sep 4 23:48:12.992145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3-shm.mount: Deactivated successfully. 
Sep 4 23:48:12.992267 systemd[1]: var-lib-kubelet-pods-3bc80af6\x2de3eb\x2d49be\x2d95e4\x2df1dc275b5747-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjc46.mount: Deactivated successfully. Sep 4 23:48:12.992380 systemd[1]: var-lib-kubelet-pods-3bc80af6\x2de3eb\x2d49be\x2d95e4\x2df1dc275b5747-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:48:12.992485 systemd[1]: var-lib-kubelet-pods-3bc80af6\x2de3eb\x2d49be\x2d95e4\x2df1dc275b5747-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:48:13.002716 systemd[1]: Removed slice kubepods-besteffort-pod8f2706c7_b85e_44ec_8208_608cb4a3da92.slice - libcontainer container kubepods-besteffort-pod8f2706c7_b85e_44ec_8208_608cb4a3da92.slice. Sep 4 23:48:13.015756 containerd[1483]: time="2025-09-04T23:48:13.015657948Z" level=info msg="RemoveContainer for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" returns successfully" Sep 4 23:48:13.019309 kubelet[2639]: I0904 23:48:13.019246 2639 scope.go:117] "RemoveContainer" containerID="51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab" Sep 4 23:48:13.020193 containerd[1483]: time="2025-09-04T23:48:13.020115168Z" level=error msg="ContainerStatus for \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\": not found" Sep 4 23:48:13.020843 kubelet[2639]: E0904 23:48:13.020380 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\": not found" containerID="51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab" Sep 4 23:48:13.020843 kubelet[2639]: I0904 23:48:13.020433 2639 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab"} err="failed to get container status \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"51b72eef288deac26f656a8edb4b01d2488ff5b29d3ba345371c9c0fc11674ab\": not found" Sep 4 23:48:13.020843 kubelet[2639]: I0904 23:48:13.020575 2639 scope.go:117] "RemoveContainer" containerID="618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a" Sep 4 23:48:13.025725 containerd[1483]: time="2025-09-04T23:48:13.025125531Z" level=info msg="RemoveContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\"" Sep 4 23:48:13.031617 systemd[1]: Removed slice kubepods-burstable-pod3bc80af6_e3eb_49be_95e4_f1dc275b5747.slice - libcontainer container kubepods-burstable-pod3bc80af6_e3eb_49be_95e4_f1dc275b5747.slice. Sep 4 23:48:13.031820 systemd[1]: kubepods-burstable-pod3bc80af6_e3eb_49be_95e4_f1dc275b5747.slice: Consumed 10.770s CPU time, 125.3M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 4 23:48:13.038478 containerd[1483]: time="2025-09-04T23:48:13.038206103Z" level=info msg="RemoveContainer for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" returns successfully" Sep 4 23:48:13.038699 kubelet[2639]: I0904 23:48:13.038530 2639 scope.go:117] "RemoveContainer" containerID="27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45" Sep 4 23:48:13.042631 containerd[1483]: time="2025-09-04T23:48:13.042491686Z" level=info msg="RemoveContainer for \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\"" Sep 4 23:48:13.049732 containerd[1483]: time="2025-09-04T23:48:13.049471968Z" level=info msg="RemoveContainer for \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\" returns successfully" Sep 4 23:48:13.052980 kubelet[2639]: I0904 23:48:13.052703 2639 scope.go:117] "RemoveContainer" containerID="1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d" Sep 4 23:48:13.056025 containerd[1483]: time="2025-09-04T23:48:13.055935501Z" level=info msg="RemoveContainer for \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\"" Sep 4 23:48:13.066533 containerd[1483]: time="2025-09-04T23:48:13.066461350Z" level=info msg="RemoveContainer for \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\" returns successfully" Sep 4 23:48:13.067285 kubelet[2639]: I0904 23:48:13.066871 2639 scope.go:117] "RemoveContainer" containerID="81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901" Sep 4 23:48:13.077630 containerd[1483]: time="2025-09-04T23:48:13.076769334Z" level=info msg="RemoveContainer for \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\"" Sep 4 23:48:13.085677 containerd[1483]: time="2025-09-04T23:48:13.085430513Z" level=info msg="RemoveContainer for \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\" returns successfully" Sep 4 23:48:13.088409 kubelet[2639]: I0904 23:48:13.088365 2639 scope.go:117] "RemoveContainer" 
containerID="153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb" Sep 4 23:48:13.091454 containerd[1483]: time="2025-09-04T23:48:13.090940164Z" level=info msg="RemoveContainer for \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\"" Sep 4 23:48:13.103226 containerd[1483]: time="2025-09-04T23:48:13.103053296Z" level=info msg="RemoveContainer for \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\" returns successfully" Sep 4 23:48:13.103867 kubelet[2639]: I0904 23:48:13.103538 2639 scope.go:117] "RemoveContainer" containerID="618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a" Sep 4 23:48:13.104828 containerd[1483]: time="2025-09-04T23:48:13.104332069Z" level=error msg="ContainerStatus for \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\": not found" Sep 4 23:48:13.104991 kubelet[2639]: E0904 23:48:13.104612 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\": not found" containerID="618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a" Sep 4 23:48:13.104991 kubelet[2639]: I0904 23:48:13.104673 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a"} err="failed to get container status \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\": rpc error: code = NotFound desc = an error occurred when try to find container \"618c40a1861921817c6087f480ae4431db5befcfc3854866857e7f16a83d322a\": not found" Sep 4 23:48:13.104991 kubelet[2639]: I0904 23:48:13.104719 2639 scope.go:117] "RemoveContainer" 
containerID="27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45" Sep 4 23:48:13.105184 containerd[1483]: time="2025-09-04T23:48:13.105062496Z" level=error msg="ContainerStatus for \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\": not found" Sep 4 23:48:13.105255 kubelet[2639]: E0904 23:48:13.105234 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\": not found" containerID="27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45" Sep 4 23:48:13.105312 kubelet[2639]: I0904 23:48:13.105270 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45"} err="failed to get container status \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\": rpc error: code = NotFound desc = an error occurred when try to find container \"27e3b01067e554746d4d081b360d15af03ec10589d0ad797cf01e94ec68f2e45\": not found" Sep 4 23:48:13.105312 kubelet[2639]: I0904 23:48:13.105303 2639 scope.go:117] "RemoveContainer" containerID="1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d" Sep 4 23:48:13.105597 containerd[1483]: time="2025-09-04T23:48:13.105534361Z" level=error msg="ContainerStatus for \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\": not found" Sep 4 23:48:13.105792 kubelet[2639]: E0904 23:48:13.105761 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\": not found" containerID="1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d" Sep 4 23:48:13.105874 kubelet[2639]: I0904 23:48:13.105806 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d"} err="failed to get container status \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1542d629714745da67c87ac6236ed9af3caccafd68803d6ff2246ac82da2e00d\": not found" Sep 4 23:48:13.106134 kubelet[2639]: I0904 23:48:13.105842 2639 scope.go:117] "RemoveContainer" containerID="81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901" Sep 4 23:48:13.106794 containerd[1483]: time="2025-09-04T23:48:13.106508500Z" level=error msg="ContainerStatus for \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\": not found" Sep 4 23:48:13.107079 kubelet[2639]: E0904 23:48:13.106726 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\": not found" containerID="81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901" Sep 4 23:48:13.107079 kubelet[2639]: I0904 23:48:13.106762 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901"} err="failed to get container status \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"81b7447f0c6d2c822fa747a20ca2542d40c22756a7dc936c2d2ed2fdb80ba901\": not found" Sep 4 23:48:13.107680 kubelet[2639]: I0904 23:48:13.106912 2639 scope.go:117] "RemoveContainer" containerID="153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb" Sep 4 23:48:13.108261 containerd[1483]: time="2025-09-04T23:48:13.107573354Z" level=error msg="ContainerStatus for \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\": not found" Sep 4 23:48:13.109046 kubelet[2639]: E0904 23:48:13.109013 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\": not found" containerID="153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb" Sep 4 23:48:13.109236 kubelet[2639]: I0904 23:48:13.109205 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb"} err="failed to get container status \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"153b6f9d92874e73d4cbc0548f838a545973091964a5758f20f967071bc978cb\": not found" Sep 4 23:48:13.898725 sshd[4264]: Connection closed by 139.178.68.195 port 49860 Sep 4 23:48:13.899970 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:13.907096 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:48:13.908392 systemd[1]: sshd@23-10.128.0.91:22-139.178.68.195:49860.service: Deactivated successfully. Sep 4 23:48:13.912285 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 4 23:48:13.913101 systemd[1]: session-24.scope: Consumed 1.175s CPU time, 23.8M memory peak. Sep 4 23:48:13.914684 systemd-logind[1462]: Removed session 24. Sep 4 23:48:13.976565 systemd[1]: Started sshd@24-10.128.0.91:22-139.178.68.195:41720.service - OpenSSH per-connection server daemon (139.178.68.195:41720). Sep 4 23:48:14.337996 sshd[4428]: Accepted publickey for core from 139.178.68.195 port 41720 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:48:14.340631 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:14.350016 systemd-logind[1462]: New session 25 of user core. Sep 4 23:48:14.355303 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:48:14.446012 kubelet[2639]: I0904 23:48:14.445454 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc80af6-e3eb-49be-95e4-f1dc275b5747" path="/var/lib/kubelet/pods/3bc80af6-e3eb-49be-95e4-f1dc275b5747/volumes" Sep 4 23:48:14.448556 kubelet[2639]: I0904 23:48:14.448049 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2706c7-b85e-44ec-8208-608cb4a3da92" path="/var/lib/kubelet/pods/8f2706c7-b85e-44ec-8208-608cb4a3da92/volumes" Sep 4 23:48:15.016831 ntpd[1444]: Deleting interface #12 lxc_health, fe80::1cc6:caff:fe39:3fa1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Sep 4 23:48:15.018054 ntpd[1444]: 4 Sep 23:48:15 ntpd[1444]: Deleting interface #12 lxc_health, fe80::1cc6:caff:fe39:3fa1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Sep 4 23:48:15.675127 kubelet[2639]: E0904 23:48:15.674965 2639 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:48:15.677078 kubelet[2639]: I0904 23:48:15.676035 2639 memory_manager.go:355] "RemoveStaleState removing state" 
podUID="3bc80af6-e3eb-49be-95e4-f1dc275b5747" containerName="cilium-agent" Sep 4 23:48:15.677078 kubelet[2639]: I0904 23:48:15.676098 2639 memory_manager.go:355] "RemoveStaleState removing state" podUID="8f2706c7-b85e-44ec-8208-608cb4a3da92" containerName="cilium-operator" Sep 4 23:48:15.692950 kubelet[2639]: I0904 23:48:15.690080 2639 status_manager.go:890] "Failed to get status for pod" podUID="e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2" pod="kube-system/cilium-w8z8p" err="pods \"cilium-w8z8p\" is forbidden: User \"system:node:ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-2-nightly-20250904-2100-8def366c1a3911ef6699' and this object" Sep 4 23:48:15.693129 sshd[4430]: Connection closed by 139.178.68.195 port 41720 Sep 4 23:48:15.695274 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:15.696328 systemd[1]: Created slice kubepods-burstable-pode75b9f11_e7cd_43cc_bfec_ada7e5c74ed2.slice - libcontainer container kubepods-burstable-pode75b9f11_e7cd_43cc_bfec_ada7e5c74ed2.slice. Sep 4 23:48:15.713158 systemd[1]: sshd@24-10.128.0.91:22-139.178.68.195:41720.service: Deactivated successfully. Sep 4 23:48:15.719515 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:48:15.720156 systemd[1]: session-25.scope: Consumed 1.076s CPU time, 23.9M memory peak. Sep 4 23:48:15.721982 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:48:15.743192 systemd-logind[1462]: Removed session 25. Sep 4 23:48:15.753676 systemd[1]: Started sshd@25-10.128.0.91:22-139.178.68.195:41732.service - OpenSSH per-connection server daemon (139.178.68.195:41732). 
Sep 4 23:48:15.820546 kubelet[2639]: I0904 23:48:15.820424 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-xtables-lock\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.820546 kubelet[2639]: I0904 23:48:15.820492 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-host-proc-sys-kernel\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.820546 kubelet[2639]: I0904 23:48:15.820525 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-hostproc\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.820546 kubelet[2639]: I0904 23:48:15.820552 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-host-proc-sys-net\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820580 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-cilium-config-path\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820609 2639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-cni-path\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820638 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-clustermesh-secrets\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820665 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-hubble-tls\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820693 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8tc\" (UniqueName: \"kubernetes.io/projected/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-kube-api-access-vj8tc\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821153 kubelet[2639]: I0904 23:48:15.820723 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-etc-cni-netd\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821510 kubelet[2639]: I0904 23:48:15.820746 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-lib-modules\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821510 kubelet[2639]: I0904 23:48:15.820772 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-cilium-run\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821510 kubelet[2639]: I0904 23:48:15.820800 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-bpf-maps\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821510 kubelet[2639]: I0904 23:48:15.820831 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-cilium-cgroup\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:15.821510 kubelet[2639]: I0904 23:48:15.820905 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2-cilium-ipsec-secrets\") pod \"cilium-w8z8p\" (UID: \"e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2\") " pod="kube-system/cilium-w8z8p" Sep 4 23:48:16.007624 containerd[1483]: time="2025-09-04T23:48:16.007370413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8z8p,Uid:e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2,Namespace:kube-system,Attempt:0,}" Sep 4 23:48:16.044317 containerd[1483]: time="2025-09-04T23:48:16.044193222Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:48:16.044655 containerd[1483]: time="2025-09-04T23:48:16.044293035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:48:16.044655 containerd[1483]: time="2025-09-04T23:48:16.044321206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:16.045092 containerd[1483]: time="2025-09-04T23:48:16.044451282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:48:16.063976 sshd[4440]: Accepted publickey for core from 139.178.68.195 port 41732 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc Sep 4 23:48:16.068266 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:16.088210 systemd[1]: Started cri-containerd-97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053.scope - libcontainer container 97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053. Sep 4 23:48:16.096353 systemd-logind[1462]: New session 26 of user core. Sep 4 23:48:16.096894 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:48:16.136158 containerd[1483]: time="2025-09-04T23:48:16.136097152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w8z8p,Uid:e75b9f11-e7cd-43cc-bfec-ada7e5c74ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\"" Sep 4 23:48:16.141470 containerd[1483]: time="2025-09-04T23:48:16.141165626Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:48:16.157080 containerd[1483]: time="2025-09-04T23:48:16.157007767Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0\"" Sep 4 23:48:16.158204 containerd[1483]: time="2025-09-04T23:48:16.158096487Z" level=info msg="StartContainer for \"19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0\"" Sep 4 23:48:16.199248 systemd[1]: Started cri-containerd-19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0.scope - libcontainer container 19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0. Sep 4 23:48:16.245056 containerd[1483]: time="2025-09-04T23:48:16.244999404Z" level=info msg="StartContainer for \"19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0\" returns successfully" Sep 4 23:48:16.259099 systemd[1]: cri-containerd-19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0.scope: Deactivated successfully. Sep 4 23:48:16.284813 sshd[4482]: Connection closed by 139.178.68.195 port 41732 Sep 4 23:48:16.285693 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:16.291037 systemd[1]: sshd@25-10.128.0.91:22-139.178.68.195:41732.service: Deactivated successfully. 
Sep 4 23:48:16.294200 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 23:48:16.298197 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit.
Sep 4 23:48:16.300118 systemd-logind[1462]: Removed session 26.
Sep 4 23:48:16.324915 containerd[1483]: time="2025-09-04T23:48:16.324818820Z" level=info msg="shim disconnected" id=19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0 namespace=k8s.io
Sep 4 23:48:16.324915 containerd[1483]: time="2025-09-04T23:48:16.324913029Z" level=warning msg="cleaning up after shim disconnected" id=19891e8412349fc4d84d48df50c1008158bebc7881b026039e353a42cbfdedf0 namespace=k8s.io
Sep 4 23:48:16.324915 containerd[1483]: time="2025-09-04T23:48:16.324948969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:16.349162 containerd[1483]: time="2025-09-04T23:48:16.348969569Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:48:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:48:16.351058 systemd[1]: Started sshd@26-10.128.0.91:22-139.178.68.195:41740.service - OpenSSH per-connection server daemon (139.178.68.195:41740).
Sep 4 23:48:16.661626 sshd[4559]: Accepted publickey for core from 139.178.68.195 port 41740 ssh2: RSA SHA256:s25R9jMJ2r9X49pTCObjvm1k14QyrX8IlEfg67QbIEc
Sep 4 23:48:16.663673 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:16.672004 systemd-logind[1462]: New session 27 of user core.
Sep 4 23:48:16.675168 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 23:48:17.038276 containerd[1483]: time="2025-09-04T23:48:17.037070968Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:48:17.059694 containerd[1483]: time="2025-09-04T23:48:17.058545758Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768\""
Sep 4 23:48:17.062535 containerd[1483]: time="2025-09-04T23:48:17.062030411Z" level=info msg="StartContainer for \"c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768\""
Sep 4 23:48:17.134539 systemd[1]: Started cri-containerd-c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768.scope - libcontainer container c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768.
Sep 4 23:48:17.185499 containerd[1483]: time="2025-09-04T23:48:17.185409650Z" level=info msg="StartContainer for \"c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768\" returns successfully"
Sep 4 23:48:17.200303 systemd[1]: cri-containerd-c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768.scope: Deactivated successfully.
Sep 4 23:48:17.239869 containerd[1483]: time="2025-09-04T23:48:17.239461036Z" level=info msg="shim disconnected" id=c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768 namespace=k8s.io
Sep 4 23:48:17.239869 containerd[1483]: time="2025-09-04T23:48:17.239554637Z" level=warning msg="cleaning up after shim disconnected" id=c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768 namespace=k8s.io
Sep 4 23:48:17.239869 containerd[1483]: time="2025-09-04T23:48:17.239572032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:17.936978 systemd[1]: run-containerd-runc-k8s.io-c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768-runc.tkiAQD.mount: Deactivated successfully.
Sep 4 23:48:17.937179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c063ff979852f3acc5607683d451621d189aa58ec5daf07744f816c028042768-rootfs.mount: Deactivated successfully.
Sep 4 23:48:18.040089 containerd[1483]: time="2025-09-04T23:48:18.039781357Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:48:18.069893 containerd[1483]: time="2025-09-04T23:48:18.069822920Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2\""
Sep 4 23:48:18.070892 containerd[1483]: time="2025-09-04T23:48:18.070842186Z" level=info msg="StartContainer for \"c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2\""
Sep 4 23:48:18.135199 systemd[1]: Started cri-containerd-c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2.scope - libcontainer container c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2.
Sep 4 23:48:18.189771 containerd[1483]: time="2025-09-04T23:48:18.187509198Z" level=info msg="StartContainer for \"c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2\" returns successfully"
Sep 4 23:48:18.195385 systemd[1]: cri-containerd-c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2.scope: Deactivated successfully.
Sep 4 23:48:18.230654 containerd[1483]: time="2025-09-04T23:48:18.230559169Z" level=info msg="shim disconnected" id=c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2 namespace=k8s.io
Sep 4 23:48:18.230654 containerd[1483]: time="2025-09-04T23:48:18.230637015Z" level=warning msg="cleaning up after shim disconnected" id=c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2 namespace=k8s.io
Sep 4 23:48:18.230654 containerd[1483]: time="2025-09-04T23:48:18.230651637Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:18.937376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c663e0ff72f5943426beb9a69792c92d5152f53e977f61d796ebcbf9b3426df2-rootfs.mount: Deactivated successfully.
Sep 4 23:48:19.045755 containerd[1483]: time="2025-09-04T23:48:19.045485289Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:48:19.074648 containerd[1483]: time="2025-09-04T23:48:19.074407735Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c\""
Sep 4 23:48:19.076081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896474878.mount: Deactivated successfully.
Sep 4 23:48:19.078284 containerd[1483]: time="2025-09-04T23:48:19.078221600Z" level=info msg="StartContainer for \"f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c\""
Sep 4 23:48:19.152387 systemd[1]: Started cri-containerd-f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c.scope - libcontainer container f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c.
Sep 4 23:48:19.208104 systemd[1]: cri-containerd-f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c.scope: Deactivated successfully.
Sep 4 23:48:19.216442 containerd[1483]: time="2025-09-04T23:48:19.214002210Z" level=info msg="StartContainer for \"f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c\" returns successfully"
Sep 4 23:48:19.255868 containerd[1483]: time="2025-09-04T23:48:19.255768577Z" level=info msg="shim disconnected" id=f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c namespace=k8s.io
Sep 4 23:48:19.255868 containerd[1483]: time="2025-09-04T23:48:19.255860529Z" level=warning msg="cleaning up after shim disconnected" id=f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c namespace=k8s.io
Sep 4 23:48:19.255868 containerd[1483]: time="2025-09-04T23:48:19.255875353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:19.937473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2a577d250aa86812115d5d90915ea7b381f738d17f138399b9b0dfcd1bd8b3c-rootfs.mount: Deactivated successfully.
Sep 4 23:48:20.052618 containerd[1483]: time="2025-09-04T23:48:20.052529606Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:48:20.086421 containerd[1483]: time="2025-09-04T23:48:20.086350782Z" level=info msg="CreateContainer within sandbox \"97e31b43423f923a8c678132598c8f0e38b976b2c5c55c47de01ceb5c696e053\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74\""
Sep 4 23:48:20.088111 containerd[1483]: time="2025-09-04T23:48:20.087290410Z" level=info msg="StartContainer for \"2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74\""
Sep 4 23:48:20.088827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620078350.mount: Deactivated successfully.
Sep 4 23:48:20.162333 systemd[1]: Started cri-containerd-2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74.scope - libcontainer container 2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74.
Sep 4 23:48:20.221393 containerd[1483]: time="2025-09-04T23:48:20.221089842Z" level=info msg="StartContainer for \"2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74\" returns successfully"
Sep 4 23:48:20.444984 kubelet[2639]: E0904 23:48:20.442874 2639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-nlq9f" podUID="1ddcd78b-47cd-4dd6-8941-5dde374e9a89"
Sep 4 23:48:20.493498 containerd[1483]: time="2025-09-04T23:48:20.492346409Z" level=info msg="StopPodSandbox for \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\""
Sep 4 23:48:20.493498 containerd[1483]: time="2025-09-04T23:48:20.492721576Z" level=info msg="TearDown network for sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" successfully"
Sep 4 23:48:20.493498 containerd[1483]: time="2025-09-04T23:48:20.492765992Z" level=info msg="StopPodSandbox for \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" returns successfully"
Sep 4 23:48:20.499021 containerd[1483]: time="2025-09-04T23:48:20.496990903Z" level=info msg="RemovePodSandbox for \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\""
Sep 4 23:48:20.499021 containerd[1483]: time="2025-09-04T23:48:20.497080354Z" level=info msg="Forcibly stopping sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\""
Sep 4 23:48:20.499021 containerd[1483]: time="2025-09-04T23:48:20.497207451Z" level=info msg="TearDown network for sandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" successfully"
Sep 4 23:48:20.506304 containerd[1483]: time="2025-09-04T23:48:20.505758155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:48:20.506778 containerd[1483]: time="2025-09-04T23:48:20.506742710Z" level=info msg="RemovePodSandbox \"b811ff7e8c120dd54a2ad35f5edf0ea6cee1ff3f3cf259ec7adca9d203ae7071\" returns successfully"
Sep 4 23:48:20.508806 containerd[1483]: time="2025-09-04T23:48:20.508768222Z" level=info msg="StopPodSandbox for \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\""
Sep 4 23:48:20.509742 containerd[1483]: time="2025-09-04T23:48:20.509705301Z" level=info msg="TearDown network for sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" successfully"
Sep 4 23:48:20.510476 containerd[1483]: time="2025-09-04T23:48:20.509988988Z" level=info msg="StopPodSandbox for \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" returns successfully"
Sep 4 23:48:20.514949 containerd[1483]: time="2025-09-04T23:48:20.512558807Z" level=info msg="RemovePodSandbox for \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\""
Sep 4 23:48:20.514949 containerd[1483]: time="2025-09-04T23:48:20.512602353Z" level=info msg="Forcibly stopping sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\""
Sep 4 23:48:20.514949 containerd[1483]: time="2025-09-04T23:48:20.512715305Z" level=info msg="TearDown network for sandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" successfully"
Sep 4 23:48:20.523770 containerd[1483]: time="2025-09-04T23:48:20.523506860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:48:20.524272 containerd[1483]: time="2025-09-04T23:48:20.524238183Z" level=info msg="RemovePodSandbox \"175c26662c0267a01720b864d6dcc6176175f8d7b39a38dba30acbed893434f3\" returns successfully"
Sep 4 23:48:20.798972 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 4 23:48:20.940501 systemd[1]: run-containerd-runc-k8s.io-2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74-runc.yOdSGW.mount: Deactivated successfully.
Sep 4 23:48:21.076952 kubelet[2639]: I0904 23:48:21.076856 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w8z8p" podStartSLOduration=6.076828509 podStartE2EDuration="6.076828509s" podCreationTimestamp="2025-09-04 23:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:21.076110881 +0000 UTC m=+120.818744981" watchObservedRunningTime="2025-09-04 23:48:21.076828509 +0000 UTC m=+120.819462608"
Sep 4 23:48:23.219166 systemd[1]: run-containerd-runc-k8s.io-2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74-runc.YcEQC8.mount: Deactivated successfully.
Sep 4 23:48:24.477145 systemd-networkd[1384]: lxc_health: Link UP
Sep 4 23:48:24.483732 systemd-networkd[1384]: lxc_health: Gained carrier
Sep 4 23:48:25.485567 systemd[1]: run-containerd-runc-k8s.io-2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74-runc.hXl7y7.mount: Deactivated successfully.
Sep 4 23:48:25.912797 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Sep 4 23:48:27.799989 systemd[1]: run-containerd-runc-k8s.io-2da375c598b170e5b6c431cd35bbeadfe94caff527ac56e0d8a94d4b07203d74-runc.Bd7eLn.mount: Deactivated successfully.
Sep 4 23:48:28.017022 ntpd[1444]: Listen normally on 15 lxc_health [fe80::74e8:3fff:fe24:9818%14]:123
Sep 4 23:48:28.018180 ntpd[1444]: 4 Sep 23:48:28 ntpd[1444]: Listen normally on 15 lxc_health [fe80::74e8:3fff:fe24:9818%14]:123
Sep 4 23:48:30.220729 sshd[4561]: Connection closed by 139.178.68.195 port 41740
Sep 4 23:48:30.221366 sshd-session[4559]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:30.230370 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:48:30.231545 systemd[1]: sshd@26-10.128.0.91:22-139.178.68.195:41740.service: Deactivated successfully.
Sep 4 23:48:30.238389 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:48:30.244235 systemd-logind[1462]: Removed session 27.