Sep 12 10:16:30.153677 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025 Sep 12 10:16:30.153734 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:16:30.153751 kernel: BIOS-provided physical RAM map: Sep 12 10:16:30.153766 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 12 10:16:30.153779 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 12 10:16:30.153793 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 12 10:16:30.153812 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 12 10:16:30.153827 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 12 10:16:30.153849 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable Sep 12 10:16:30.153863 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data Sep 12 10:16:30.153877 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable Sep 12 10:16:30.153892 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 12 10:16:30.153906 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 12 10:16:30.153922 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 12 10:16:30.153946 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 12 10:16:30.153962 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 12 10:16:30.153979 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 12 10:16:30.153996 kernel: NX (Execute Disable) protection: active Sep 12 10:16:30.154011 kernel: APIC: Static calls initialized Sep 12 10:16:30.154037 kernel: efi: EFI v2.7 by EDK II Sep 12 10:16:30.154053 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018 Sep 12 10:16:30.154069 kernel: random: crng init done Sep 12 10:16:30.154083 kernel: secureboot: Secure boot disabled Sep 12 10:16:30.154125 kernel: SMBIOS 2.4 present. 
Sep 12 10:16:30.154150 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 12 10:16:30.154166 kernel: Hypervisor detected: KVM Sep 12 10:16:30.154182 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 10:16:30.154198 kernel: kvm-clock: using sched offset of 13349859218 cycles Sep 12 10:16:30.154212 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 10:16:30.154227 kernel: tsc: Detected 2299.998 MHz processor Sep 12 10:16:30.154242 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 10:16:30.154260 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 10:16:30.154276 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 12 10:16:30.154293 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 12 10:16:30.154315 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 10:16:30.154332 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 12 10:16:30.154348 kernel: Using GB pages for direct mapping Sep 12 10:16:30.154365 kernel: ACPI: Early table checksum verification disabled Sep 12 10:16:30.154381 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 12 10:16:30.154399 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 12 10:16:30.154422 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 12 10:16:30.154445 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 12 10:16:30.154462 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 12 10:16:30.154480 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 12 10:16:30.154498 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 12 10:16:30.154516 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 12 10:16:30.154534 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 12 10:16:30.154552 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 12 10:16:30.154574 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 12 10:16:30.154600 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 12 10:16:30.154618 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 12 10:16:30.154635 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 12 10:16:30.154654 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 12 10:16:30.154671 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 12 10:16:30.154689 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 12 10:16:30.154707 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 12 10:16:30.154724 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 12 10:16:30.154747 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 12 10:16:30.154764 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 10:16:30.154783 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 12 10:16:30.154801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 10:16:30.154819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 12 10:16:30.154835 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 12 10:16:30.154853 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Sep 12 10:16:30.154870 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Sep 12 10:16:30.154888 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Sep 12 10:16:30.154911 kernel: Zone ranges: Sep 12 10:16:30.154930 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 10:16:30.154947 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 12 10:16:30.154965 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 12 10:16:30.154982 kernel: Movable zone start for each node Sep 12 10:16:30.155000 kernel: Early memory node ranges Sep 12 10:16:30.155018 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 12 10:16:30.155037 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 12 10:16:30.155054 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff] Sep 12 10:16:30.155077 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff] Sep 12 10:16:30.155110 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 12 10:16:30.155129 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 12 10:16:30.155147 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 12 10:16:30.155165 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 10:16:30.155183 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 12 10:16:30.155200 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 12 10:16:30.155218 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Sep 12 10:16:30.155235 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 12 10:16:30.155258 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 12 10:16:30.155276 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 12 10:16:30.155294 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 10:16:30.155312 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 10:16:30.155330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 10:16:30.155347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 10:16:30.155365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 10:16:30.155383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 10:16:30.155401 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 10:16:30.155424 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 10:16:30.155442 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 12 10:16:30.155459 kernel: Booting paravirtualized kernel on KVM Sep 12 10:16:30.155477 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 10:16:30.155495 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 10:16:30.155513 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 12 10:16:30.155530 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 10:16:30.155548 kernel: pcpu-alloc: [0] 0 1 Sep 12 10:16:30.155565 kernel: kvm-guest: PV spinlocks enabled Sep 12 10:16:30.155594 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 10:16:30.155615 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:16:30.155634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 10:16:30.155652 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 12 10:16:30.155669 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 10:16:30.155687 kernel: Fallback order for Node 0: 0 Sep 12 10:16:30.155705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Sep 12 10:16:30.155723 kernel: Policy zone: Normal Sep 12 10:16:30.155745 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 10:16:30.155763 kernel: software IO TLB: area num 2. Sep 12 10:16:30.155782 kernel: Memory: 7511324K/7860552K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 348972K reserved, 0K cma-reserved) Sep 12 10:16:30.155800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 10:16:30.155818 kernel: Kernel/User page tables isolation: enabled Sep 12 10:16:30.155836 kernel: ftrace: allocating 37946 entries in 149 pages Sep 12 10:16:30.155853 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 10:16:30.155872 kernel: Dynamic Preempt: voluntary Sep 12 10:16:30.155910 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 10:16:30.155937 kernel: rcu: RCU event tracing is enabled. Sep 12 10:16:30.155956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 10:16:30.155975 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 10:16:30.156000 kernel: Rude variant of Tasks RCU enabled. Sep 12 10:16:30.156019 kernel: Tracing variant of Tasks RCU enabled. Sep 12 10:16:30.156038 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 10:16:30.156058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 10:16:30.156078 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 10:16:30.156114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 10:16:30.156130 kernel: Console: colour dummy device 80x25 Sep 12 10:16:30.156147 kernel: printk: console [ttyS0] enabled Sep 12 10:16:30.156165 kernel: ACPI: Core revision 20230628 Sep 12 10:16:30.156182 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 10:16:30.156199 kernel: x2apic enabled Sep 12 10:16:30.156216 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 10:16:30.156234 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 12 10:16:30.156252 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 12 10:16:30.156275 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Sep 12 10:16:30.156293 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 12 10:16:30.156311 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 12 10:16:30.156330 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 10:16:30.156348 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 12 10:16:30.156365 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 12 10:16:30.156383 kernel: Spectre V2 : Mitigation: IBRS Sep 12 10:16:30.156401 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 10:16:30.156419 kernel: RETBleed: Mitigation: IBRS Sep 12 10:16:30.156442 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 10:16:30.156459 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 12 10:16:30.156475 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 10:16:30.156492 kernel: MDS: Mitigation: Clear CPU buffers Sep 12 10:16:30.156511 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 10:16:30.156528 kernel: active return thunk: its_return_thunk Sep 12 10:16:30.156547 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 10:16:30.156566 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 10:16:30.156592 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 10:16:30.156613 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 10:16:30.156629 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 10:16:30.156647 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 12 10:16:30.156666 kernel: Freeing SMP alternatives memory: 32K Sep 12 10:16:30.156683 kernel: pid_max: default: 32768 minimum: 301 Sep 12 10:16:30.156700 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 10:16:30.156718 kernel: landlock: Up and running. Sep 12 10:16:30.156735 kernel: SELinux: Initializing. Sep 12 10:16:30.156753 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 10:16:30.156776 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 10:16:30.156793 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 12 10:16:30.156811 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 10:16:30.156828 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 10:16:30.156845 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 10:16:30.156865 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 12 10:16:30.156882 kernel: signal: max sigframe size: 1776 Sep 12 10:16:30.156900 kernel: rcu: Hierarchical SRCU implementation. Sep 12 10:16:30.156925 kernel: rcu: Max phase no-delay instances is 400. Sep 12 10:16:30.156943 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 10:16:30.156960 kernel: smp: Bringing up secondary CPUs ... Sep 12 10:16:30.156977 kernel: smpboot: x86: Booting SMP configuration: Sep 12 10:16:30.156993 kernel: .... 
node #0, CPUs: #1 Sep 12 10:16:30.157011 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 12 10:16:30.157030 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 12 10:16:30.157048 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 10:16:30.157066 kernel: smpboot: Max logical packages: 1 Sep 12 10:16:30.157089 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 12 10:16:30.157131 kernel: devtmpfs: initialized Sep 12 10:16:30.157150 kernel: x86/mm: Memory block size: 128MB Sep 12 10:16:30.157169 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 12 10:16:30.157185 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 10:16:30.157199 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 10:16:30.157214 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 10:16:30.157231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 10:16:30.157249 kernel: audit: initializing netlink subsys (disabled) Sep 12 10:16:30.157273 kernel: audit: type=2000 audit(1757672188.633:1): state=initialized audit_enabled=0 res=1 Sep 12 10:16:30.157288 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 10:16:30.157304 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 10:16:30.157320 kernel: cpuidle: using governor menu Sep 12 10:16:30.157336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 10:16:30.157351 kernel: dca service started, version 1.12.1 Sep 12 10:16:30.157369 kernel: PCI: Using configuration type 1 for base access Sep 12 10:16:30.157385 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 10:16:30.157400 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 10:16:30.157424 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 10:16:30.157442 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 10:16:30.157460 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 10:16:30.157478 kernel: ACPI: Added _OSI(Module Device) Sep 12 10:16:30.157497 kernel: ACPI: Added _OSI(Processor Device) Sep 12 10:16:30.157515 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 10:16:30.157533 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 12 10:16:30.157552 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 10:16:30.157570 kernel: ACPI: Interpreter enabled Sep 12 10:16:30.157600 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 10:16:30.157619 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 10:16:30.157637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 10:16:30.157655 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 12 10:16:30.157674 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 12 10:16:30.157691 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 10:16:30.157984 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 10:16:30.158218 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 10:16:30.158415 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 10:16:30.158438 kernel: PCI host bridge to bus 0000:00 Sep 12 10:16:30.158643 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 10:16:30.158824 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 10:16:30.159000 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 10:16:30.159229 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 12 10:16:30.159412 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 10:16:30.159650 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 12 10:16:30.159862 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Sep 12 10:16:30.160069 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 12 10:16:30.160299 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 12 10:16:30.160510 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Sep 12 10:16:30.160734 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Sep 12 10:16:30.160936 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Sep 12 10:16:30.161204 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 10:16:30.161446 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Sep 12 10:16:30.161705 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Sep 12 10:16:30.161913 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 10:16:30.162133 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 12 10:16:30.162337 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Sep 12 10:16:30.162361 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 10:16:30.162381 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 10:16:30.162399 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 
10:16:30.162418 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 10:16:30.162436 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 10:16:30.162455 kernel: iommu: Default domain type: Translated Sep 12 10:16:30.162473 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 10:16:30.162491 kernel: efivars: Registered efivars operations Sep 12 10:16:30.162516 kernel: PCI: Using ACPI for IRQ routing Sep 12 10:16:30.162535 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 10:16:30.162553 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 12 10:16:30.162571 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 12 10:16:30.162597 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff] Sep 12 10:16:30.162615 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 12 10:16:30.162633 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 12 10:16:30.162651 kernel: vgaarb: loaded Sep 12 10:16:30.162669 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 10:16:30.162693 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 10:16:30.162711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 10:16:30.162730 kernel: pnp: PnP ACPI init Sep 12 10:16:30.162748 kernel: pnp: PnP ACPI: found 7 devices Sep 12 10:16:30.162767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 10:16:30.162785 kernel: NET: Registered PF_INET protocol family Sep 12 10:16:30.162804 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 10:16:30.162823 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 12 10:16:30.162841 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 10:16:30.162865 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 10:16:30.162884 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 12 10:16:30.162903 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 12 10:16:30.162922 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 10:16:30.162941 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 10:16:30.162959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 10:16:30.162977 kernel: NET: Registered PF_XDP protocol family Sep 12 10:16:30.163209 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 10:16:30.163388 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 10:16:30.163556 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 10:16:30.163733 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 12 10:16:30.163927 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 10:16:30.163951 kernel: PCI: CLS 0 bytes, default 64 Sep 12 10:16:30.163970 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 10:16:30.163990 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 12 10:16:30.164009 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 10:16:30.164033 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 12 10:16:30.164052 kernel: clocksource: Switched to clocksource tsc Sep 12 10:16:30.164070 kernel: Initialise system trusted keyrings Sep 12 
10:16:30.164089 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 12 10:16:30.164131 kernel: Key type asymmetric registered Sep 12 10:16:30.164149 kernel: Asymmetric key parser 'x509' registered Sep 12 10:16:30.164168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 10:16:30.164187 kernel: io scheduler mq-deadline registered Sep 12 10:16:30.164206 kernel: io scheduler kyber registered Sep 12 10:16:30.164230 kernel: io scheduler bfq registered Sep 12 10:16:30.164249 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 10:16:30.164269 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 12 10:16:30.164466 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 12 10:16:30.164490 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 12 10:16:30.164685 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 12 10:16:30.164709 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 12 10:16:30.164896 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 12 10:16:30.164926 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 10:16:30.164944 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 10:16:30.164963 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 10:16:30.164982 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 12 10:16:30.165000 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 12 10:16:30.165252 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 12 10:16:30.165279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 10:16:30.165297 kernel: i8042: Warning: Keylock active Sep 12 10:16:30.165322 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 10:16:30.165340 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 10:16:30.165545 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 12 10:16:30.165733 kernel: rtc_cmos 00:00: registered as rtc0 Sep 12 10:16:30.165918 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T10:16:29 UTC (1757672189) Sep 12 10:16:30.166092 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 12 10:16:30.166127 kernel: intel_pstate: CPU model not supported Sep 12 10:16:30.166146 kernel: pstore: Using crash dump compression: deflate Sep 12 10:16:30.166169 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 10:16:30.166184 kernel: NET: Registered PF_INET6 protocol family Sep 12 10:16:30.166200 kernel: Segment Routing with IPv6 Sep 12 10:16:30.166218 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 10:16:30.166236 kernel: NET: Registered PF_PACKET protocol family Sep 12 10:16:30.166255 kernel: Key type dns_resolver registered Sep 12 10:16:30.166273 kernel: IPI shorthand broadcast: enabled Sep 12 10:16:30.166291 kernel: sched_clock: Marking stable (1010004689, 169391017)->(1221335064, -41939358) Sep 12 10:16:30.166310 kernel: registered taskstats version 1 Sep 12 10:16:30.166334 kernel: Loading compiled-in X.509 certificates Sep 12 10:16:30.166349 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9' Sep 12 10:16:30.166365 kernel: Key type .fscrypt registered Sep 12 10:16:30.166382 kernel: Key type fscrypt-provisioning registered Sep 12 10:16:30.166399 kernel: ima: Allocated hash algorithm: sha1 Sep 12 10:16:30.166415 kernel: ima: No architecture policies 
found Sep 12 10:16:30.166431 kernel: clk: Disabling unused clocks Sep 12 10:16:30.166447 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 12 10:16:30.166464 kernel: Write protecting the kernel read-only data: 38912k Sep 12 10:16:30.166489 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 10:16:30.166507 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 12 10:16:30.166526 kernel: Run /init as init process Sep 12 10:16:30.166545 kernel: with arguments: Sep 12 10:16:30.166563 kernel: /init Sep 12 10:16:30.166602 kernel: with environment: Sep 12 10:16:30.166620 kernel: HOME=/ Sep 12 10:16:30.166638 kernel: TERM=linux Sep 12 10:16:30.166657 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 10:16:30.166682 systemd[1]: Successfully made /usr/ read-only. Sep 12 10:16:30.166707 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:16:30.166727 systemd[1]: Detected virtualization google. Sep 12 10:16:30.166745 systemd[1]: Detected architecture x86-64. Sep 12 10:16:30.166764 systemd[1]: Running in initrd. Sep 12 10:16:30.166783 systemd[1]: No hostname configured, using default hostname. Sep 12 10:16:30.166803 systemd[1]: Hostname set to . Sep 12 10:16:30.166828 systemd[1]: Initializing machine ID from random generator. Sep 12 10:16:30.166846 systemd[1]: Queued start job for default target initrd.target. Sep 12 10:16:30.166865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:16:30.166884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:16:30.166904 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 10:16:30.166924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:16:30.166943 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 10:16:30.166969 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 10:16:30.167010 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 10:16:30.167035 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 10:16:30.167055 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:16:30.167076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:16:30.167122 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:16:30.167142 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:16:30.167162 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:16:30.167182 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:16:30.167202 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:16:30.167223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:16:30.167242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 12 10:16:30.167263 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 10:16:30.167283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:16:30.167309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:16:30.167329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:16:30.167348 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:16:30.167369 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 10:16:30.167390 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:16:30.167409 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 10:16:30.167435 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 10:16:30.167455 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:16:30.167476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:16:30.167501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:30.167568 systemd-journald[184]: Collecting audit messages is disabled. Sep 12 10:16:30.167620 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 10:16:30.167641 systemd-journald[184]: Journal started Sep 12 10:16:30.167693 systemd-journald[184]: Runtime Journal (/run/log/journal/3a6f1a350afe4e18b0f2e255ebc23eaf) is 8M, max 148.6M, 140.6M free. Sep 12 10:16:30.170919 systemd-modules-load[185]: Inserted module 'overlay' Sep 12 10:16:30.175753 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:16:30.185333 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:16:30.187164 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 10:16:30.198355 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:16:30.212031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:16:30.218336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:30.223244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 10:16:30.226430 systemd-modules-load[185]: Inserted module 'br_netfilter' Sep 12 10:16:30.230229 kernel: Bridge firewalling registered Sep 12 10:16:30.235994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:16:30.244269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:16:30.248712 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:16:30.252916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:16:30.273387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:16:30.276336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:16:30.293083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:16:30.300413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 10:16:30.307592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 10:16:30.314678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:16:30.330416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:16:30.356070 dracut-cmdline[219]: dracut-dracut-053 Sep 12 10:16:30.360559 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:16:30.371506 systemd-resolved[214]: Positive Trust Anchors: Sep 12 10:16:30.371529 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:16:30.371591 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:16:30.377264 systemd-resolved[214]: Defaulting to hostname 'linux'. Sep 12 10:16:30.379212 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:16:30.406479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:16:30.468148 kernel: SCSI subsystem initialized Sep 12 10:16:30.481144 kernel: Loading iSCSI transport class v2.0-870. Sep 12 10:16:30.494146 kernel: iscsi: registered transport (tcp) Sep 12 10:16:30.520153 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:16:30.520267 kernel: QLogic iSCSI HBA Driver Sep 12 10:16:30.579218 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 10:16:30.586515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:16:30.669987 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 10:16:30.670156 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:16:30.670189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:16:30.729191 kernel: raid6: avx2x4 gen() 22508 MB/s Sep 12 10:16:30.750149 kernel: raid6: avx2x2 gen() 23151 MB/s Sep 12 10:16:30.776218 kernel: raid6: avx2x1 gen() 20573 MB/s Sep 12 10:16:30.776319 kernel: raid6: using algorithm avx2x2 gen() 23151 MB/s Sep 12 10:16:30.803312 kernel: raid6: .... xor() 18361 MB/s, rmw enabled Sep 12 10:16:30.803441 kernel: raid6: using avx2x2 recovery algorithm Sep 12 10:16:30.833161 kernel: xor: automatically using best checksumming function avx Sep 12 10:16:31.015146 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:16:31.029614 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:16:31.034340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:16:31.097628 systemd-udevd[402]: Using default interface naming scheme 'v255'. 
Sep 12 10:16:31.105971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:16:31.124336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 10:16:31.174575 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Sep 12 10:16:31.215867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:16:31.232330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:16:31.341882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:16:31.375335 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 10:16:31.419443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:16:31.432678 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:16:31.459279 kernel: scsi host0: Virtio SCSI HBA Sep 12 10:16:31.477600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:16:31.517220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:16:31.550287 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 12 10:16:31.556302 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:16:31.549094 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:16:31.584170 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:16:31.584219 kernel: AES CTR mode by8 optimization enabled Sep 12 10:16:31.583729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:16:31.583962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:31.625890 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 12 10:16:31.626311 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 12 10:16:31.633296 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 12 10:16:31.633638 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 12 10:16:31.634212 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 10:16:31.633215 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:16:31.717838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 10:16:31.717881 kernel: GPT:17805311 != 25165823 Sep 12 10:16:31.717913 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 10:16:31.717937 kernel: GPT:17805311 != 25165823 Sep 12 10:16:31.717962 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 10:16:31.717986 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:16:31.718020 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 12 10:16:31.660209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:16:31.660553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:31.694077 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:31.785338 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (466) Sep 12 10:16:31.785390 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (457) Sep 12 10:16:31.699328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 10:16:31.750189 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:16:31.755186 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:16:31.833712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:31.884342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 12 10:16:31.897290 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 12 10:16:31.930617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 10:16:31.941352 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 12 10:16:31.961317 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 12 10:16:31.989336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:16:32.026134 disk-uuid[543]: Primary Header is updated. Sep 12 10:16:32.026134 disk-uuid[543]: Secondary Entries is updated. Sep 12 10:16:32.026134 disk-uuid[543]: Secondary Header is updated. Sep 12 10:16:32.027218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:16:32.079354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:16:32.079401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:16:32.123030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:33.092354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 10:16:33.092456 disk-uuid[545]: The operation has completed successfully. Sep 12 10:16:33.178618 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:16:33.178800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:16:33.238415 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:16:33.258621 sh[567]: Success Sep 12 10:16:33.272390 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 10:16:33.369208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:16:33.377271 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:16:33.406049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 10:16:33.453752 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:16:33.453868 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:33.453894 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:16:33.470068 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:16:33.470154 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:16:33.506143 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 10:16:33.515710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:16:33.516193 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:16:33.526433 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 12 10:16:33.596113 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:33.596185 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:33.596210 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:16:33.545070 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:16:33.621321 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 10:16:33.621396 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:16:33.632206 kernel: BTRFS info (device sda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:33.644207 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:16:33.675387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 10:16:33.754008 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:16:33.781837 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:16:33.875241 ignition[685]: Ignition 2.20.0 Sep 12 10:16:33.875817 ignition[685]: Stage: fetch-offline Sep 12 10:16:33.879654 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:16:33.875913 ignition[685]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:33.884963 systemd-networkd[748]: lo: Link UP Sep 12 10:16:33.875932 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:33.884969 systemd-networkd[748]: lo: Gained carrier Sep 12 10:16:33.876333 ignition[685]: parsed url from cmdline: "" Sep 12 10:16:33.886949 systemd-networkd[748]: Enumeration completed Sep 12 10:16:33.876345 ignition[685]: no config URL provided Sep 12 10:16:33.887595 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:33.876363 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:16:33.887604 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:16:33.876390 ignition[685]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:16:33.890036 systemd-networkd[748]: eth0: Link UP Sep 12 10:16:33.876406 ignition[685]: failed to fetch config: resource requires networking Sep 12 10:16:33.890045 systemd-networkd[748]: eth0: Gained carrier Sep 12 10:16:33.876775 ignition[685]: Ignition finished successfully Sep 12 10:16:33.890058 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:33.975504 ignition[758]: Ignition 2.20.0 Sep 12 10:16:33.899847 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 10:16:33.975515 ignition[758]: Stage: fetch Sep 12 10:16:33.901222 systemd-networkd[748]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-2-nightly-20250911-2100-377226d477597500f469.c.flatcar-212911.internal' to 'ci-4230-2-2-nightly-20250911-2100-377226d477597500f469' Sep 12 10:16:33.975728 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:33.901241 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.19/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 10:16:33.975741 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:33.926797 systemd[1]: Reached target network.target - Network. Sep 12 10:16:33.975888 ignition[758]: parsed url from cmdline: "" Sep 12 10:16:33.949325 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 10:16:33.975895 ignition[758]: no config URL provided Sep 12 10:16:33.986654 unknown[758]: fetched base config from "system" Sep 12 10:16:33.975905 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:16:33.986669 unknown[758]: fetched base config from "system" Sep 12 10:16:33.975918 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:16:33.986680 unknown[758]: fetched user config from "gcp" Sep 12 10:16:33.975952 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 12 10:16:33.990127 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 10:16:33.980846 ignition[758]: GET result: OK Sep 12 10:16:34.020922 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 10:16:33.980913 ignition[758]: parsing config with SHA512: ad3a86a2771bfcc74ddd4279c1fae5b0587fd44a65a739dfbb46253afe44f89bbf721714ce30cdfaedf87cf131938285bec6cf16d9eb026da038e37489d561ec Sep 12 10:16:34.061804 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 10:16:33.987299 ignition[758]: fetch: fetch complete Sep 12 10:16:34.085340 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 10:16:33.987306 ignition[758]: fetch: fetch passed Sep 12 10:16:34.151524 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 10:16:33.987364 ignition[758]: Ignition finished successfully Sep 12 10:16:34.174684 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:16:34.058758 ignition[765]: Ignition 2.20.0 Sep 12 10:16:34.182530 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:16:34.058768 ignition[765]: Stage: kargs Sep 12 10:16:34.214378 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:16:34.058983 ignition[765]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:34.234460 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:16:34.058995 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:34.252467 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:16:34.060207 ignition[765]: kargs: kargs passed Sep 12 10:16:34.278386 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 10:16:34.060282 ignition[765]: Ignition finished successfully Sep 12 10:16:34.148646 ignition[771]: Ignition 2.20.0 Sep 12 10:16:34.148658 ignition[771]: Stage: disks Sep 12 10:16:34.149000 ignition[771]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:34.149014 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:34.150126 ignition[771]: disks: disks passed Sep 12 10:16:34.150301 ignition[771]: Ignition finished successfully Sep 12 10:16:34.322392 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 12 10:16:34.507480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:16:34.536269 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:16:34.680151 kernel: EXT4-fs (sda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:16:34.681338 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:16:34.682430 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:16:34.714338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:16:34.725279 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:16:34.751660 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 10:16:34.751752 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:16:34.838407 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (787) Sep 12 10:16:34.838470 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:34.838497 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:34.838518 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:16:34.838533 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 10:16:34.838565 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:16:34.751799 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:16:34.822090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:16:34.847644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:16:34.864365 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 10:16:35.021788 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:16:35.034164 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:16:35.044848 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:16:35.056298 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:16:35.217345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:16:35.233323 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:16:35.253314 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:16:35.287345 kernel: BTRFS info (device sda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:35.279063 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 10:16:35.330225 ignition[899]: INFO : Ignition 2.20.0 Sep 12 10:16:35.338458 ignition[899]: INFO : Stage: mount Sep 12 10:16:35.338458 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:35.338458 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:35.338458 ignition[899]: INFO : mount: mount passed Sep 12 10:16:35.338458 ignition[899]: INFO : Ignition finished successfully Sep 12 10:16:35.335576 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 10:16:35.351490 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 10:16:35.371300 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:16:35.424472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:16:35.482328 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912) Sep 12 10:16:35.503079 kernel: BTRFS info (device sda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:35.503237 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:35.503265 kernel: BTRFS info (device sda6): using free space tree Sep 12 10:16:35.526327 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 10:16:35.526445 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 10:16:35.530901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:16:35.578361 ignition[929]: INFO : Ignition 2.20.0 Sep 12 10:16:35.586344 ignition[929]: INFO : Stage: files Sep 12 10:16:35.586344 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:35.586344 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:35.586344 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:16:35.586344 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:16:35.586344 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:16:35.651331 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:16:35.651331 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:16:35.651331 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:16:35.651331 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 10:16:35.651331 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 10:16:35.590592 unknown[929]: wrote ssh authorized keys file for user: core Sep 12 10:16:35.671313 systemd-networkd[748]: eth0: Gained IPv6LL Sep 12 10:16:35.741284 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:16:36.093801 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 10:16:36.111329 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:16:36.111329 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 
10:16:36.295374 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 10:16:36.427639 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:16:36.427639 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:16:36.459376 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 10:16:36.753361 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 10:16:37.060441 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:16:37.060441 ignition[929]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: op(e): [started] setting preset to enabled for 
"prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 10:16:37.079530 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:16:37.079530 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:16:37.079530 ignition[929]: INFO : files: files passed Sep 12 10:16:37.079530 ignition[929]: INFO : Ignition finished successfully Sep 12 10:16:37.066561 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 10:16:37.105586 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 10:16:37.133356 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 10:16:37.179083 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 10:16:37.317381 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:16:37.317381 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:16:37.179301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 10:16:37.375396 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:16:37.198620 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:16:37.203707 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 10:16:37.244384 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 10:16:37.326217 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 10:16:37.326368 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 10:16:37.331693 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 10:16:37.365484 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 10:16:37.385611 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 10:16:37.392651 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 10:16:37.515879 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:16:37.533343 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 10:16:37.568709 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:16:37.580721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:16:37.590870 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 10:16:37.619749 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 10:16:37.619998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:16:37.646770 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 10:16:37.675705 systemd[1]: Stopped target basic.target - Basic System. Sep 12 10:16:37.676185 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 10:16:37.710624 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 12 10:16:37.711068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 10:16:37.748578 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 10:16:37.749005 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:16:37.765758 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 10:16:37.786813 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 10:16:37.813717 systemd[1]: Stopped target swap.target - Swaps. Sep 12 10:16:37.832691 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 10:16:37.832956 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:16:37.858733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:16:37.868719 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:16:37.906467 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 10:16:37.906860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:16:37.936562 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 10:16:37.936979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 10:16:37.965653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 10:16:37.966131 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:16:37.987660 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 10:16:37.987877 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 10:16:38.015429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 10:16:38.026332 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 10:16:38.026656 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:16:38.068858 ignition[981]: INFO : Ignition 2.20.0 Sep 12 10:16:38.068858 ignition[981]: INFO : Stage: umount Sep 12 10:16:38.093371 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:38.093371 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 12 10:16:38.093371 ignition[981]: INFO : umount: umount passed Sep 12 10:16:38.093371 ignition[981]: INFO : Ignition finished successfully Sep 12 10:16:38.084453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 10:16:38.093558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 10:16:38.093821 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:16:38.153929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 10:16:38.154172 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:16:38.190063 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 10:16:38.191526 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 10:16:38.191658 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 10:16:38.206301 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 10:16:38.206457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 10:16:38.229860 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 10:16:38.230019 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 12 10:16:38.236035 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 10:16:38.236131 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 10:16:38.261627 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 10:16:38.261716 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 10:16:38.280644 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 10:16:38.280760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 10:16:38.299547 systemd[1]: Stopped target network.target - Network. Sep 12 10:16:38.307666 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 10:16:38.307794 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:16:38.343545 systemd[1]: Stopped target paths.target - Path Units. Sep 12 10:16:38.360404 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 10:16:38.365233 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:16:38.379457 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 10:16:38.403411 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 10:16:38.411553 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 10:16:38.411639 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:16:38.442658 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 10:16:38.442757 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:16:38.452578 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 10:16:38.452824 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 10:16:38.488550 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 10:16:38.488647 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 10:16:38.508514 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 10:16:38.508617 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 10:16:38.531775 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 10:16:38.558519 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 10:16:38.578877 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 10:16:38.579029 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 10:16:38.600256 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 10:16:38.600597 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 10:16:38.600806 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 10:16:38.616374 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 10:16:38.618232 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 10:16:38.618300 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:16:38.640261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 10:16:38.652269 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 10:16:38.652523 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:16:38.692536 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:16:38.692637 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 10:16:38.712613 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 10:16:38.712700 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 10:16:38.732509 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 10:16:38.732609 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:16:38.762693 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:16:38.784823 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 10:16:38.784940 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:16:38.785561 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 10:16:38.785741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:16:38.802009 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 10:16:38.802142 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 10:16:38.824636 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 10:16:39.168331 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 12 10:16:38.824694 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:16:38.852483 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 10:16:38.852590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:16:38.882501 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 10:16:38.882721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 10:16:38.909618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:16:38.909735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:38.947392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 10:16:38.959511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 10:16:38.959619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:16:38.988702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:16:38.988798 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:39.011164 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 10:16:39.011255 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:16:39.011803 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 10:16:39.011939 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 10:16:39.030837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 10:16:39.030972 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 10:16:39.054065 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 10:16:39.069380 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 10:16:39.117363 systemd[1]: Switching root. 
Sep 12 10:16:39.381308 systemd-journald[184]: Journal stopped Sep 12 10:16:42.283516 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 10:16:42.283590 kernel: SELinux: policy capability open_perms=1 Sep 12 10:16:42.283614 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 10:16:42.283632 kernel: SELinux: policy capability always_check_network=0 Sep 12 10:16:42.283650 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 10:16:42.283668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 10:16:42.283697 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 10:16:42.283717 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 10:16:42.283741 kernel: audit: type=1403 audit(1757672199.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 10:16:42.283765 systemd[1]: Successfully loaded SELinux policy in 97.565ms. Sep 12 10:16:42.283788 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.421ms. Sep 12 10:16:42.283812 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:16:42.283833 systemd[1]: Detected virtualization google. Sep 12 10:16:42.283853 systemd[1]: Detected architecture x86-64. Sep 12 10:16:42.283881 systemd[1]: Detected first boot. Sep 12 10:16:42.283904 systemd[1]: Initializing machine ID from random generator. Sep 12 10:16:42.283924 kernel: Guest personality initialized and is inactive Sep 12 10:16:42.283944 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 10:16:42.283963 kernel: Initialized host personality Sep 12 10:16:42.283984 zram_generator::config[1025]: No configuration found. Sep 12 10:16:42.284010 kernel: NET: Registered PF_VSOCK protocol family Sep 12 10:16:42.284030 systemd[1]: Populated /etc with preset unit settings. Sep 12 10:16:42.284054 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 10:16:42.284076 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 10:16:42.284113 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 10:16:42.284135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 10:16:42.284156 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 10:16:42.284177 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 10:16:42.284206 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 10:16:42.284228 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 10:16:42.284249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 10:16:42.284271 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 10:16:42.284293 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 10:16:42.284314 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 10:16:42.284335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 12 10:16:42.284362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:16:42.284384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 10:16:42.284405 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 10:16:42.284427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 10:16:42.284449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:16:42.284478 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 10:16:42.284500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:16:42.284522 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 10:16:42.284553 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 10:16:42.284575 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 10:16:42.284598 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 10:16:42.284620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:16:42.284643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:16:42.284666 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:16:42.284695 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:16:42.284717 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 10:16:42.284744 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 10:16:42.284766 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 10:16:42.284789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:16:42.284812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:16:42.284839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:16:42.284862 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 10:16:42.284884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 10:16:42.284907 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 10:16:42.284929 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 10:16:42.284952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:42.284974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 10:16:42.284997 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 10:16:42.285024 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 10:16:42.285050 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 10:16:42.285073 systemd[1]: Reached target machines.target - Containers. Sep 12 10:16:42.285113 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 10:16:42.285136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 10:16:42.285160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:16:42.285182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 10:16:42.285205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:16:42.285227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:16:42.285255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:16:42.285278 kernel: ACPI: bus type drm_connector registered Sep 12 10:16:42.285300 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 10:16:42.285322 kernel: fuse: init (API version 7.39) Sep 12 10:16:42.285348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:16:42.285371 kernel: loop: module loaded Sep 12 10:16:42.285393 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 10:16:42.285421 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 10:16:42.285442 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 10:16:42.285463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 10:16:42.285485 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 10:16:42.285509 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:16:42.285533 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:16:42.285557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:16:42.285629 systemd-journald[1112]: Collecting audit messages is disabled. Sep 12 10:16:42.285690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 10:16:42.285716 systemd-journald[1112]: Journal started Sep 12 10:16:42.285767 systemd-journald[1112]: Runtime Journal (/run/log/journal/0275203ecd294d70b7de29e194923a99) is 8M, max 148.6M, 140.6M free. Sep 12 10:16:40.969267 systemd[1]: Queued start job for default target multi-user.target. Sep 12 10:16:40.984211 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 10:16:40.984899 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 10:16:42.316329 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 10:16:42.334169 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 10:16:42.365237 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:16:42.390678 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 10:16:42.391134 systemd[1]: Stopped verity-setup.service. Sep 12 10:16:42.415140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:42.429394 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:16:42.441072 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 10:16:42.451717 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Sep 12 10:16:42.462641 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 10:16:42.472806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 10:16:42.483639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 10:16:42.493636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 10:16:42.503996 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 10:16:42.515982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:16:42.527817 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 10:16:42.528155 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 10:16:42.541063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:16:42.541475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:16:42.553811 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:16:42.554191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:16:42.564789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:16:42.565139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:16:42.576802 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 10:16:42.577142 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 10:16:42.588868 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:16:42.589226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:16:42.599933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:16:42.610941 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 10:16:42.623970 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 10:16:42.636937 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 10:16:42.649870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:16:42.677922 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 10:16:42.695276 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 10:16:42.722213 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 10:16:42.732404 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 10:16:42.732488 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:16:42.745442 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 10:16:42.766465 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 10:16:42.787535 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 10:16:42.797901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:16:42.806512 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 10:16:42.831062 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 12 10:16:42.843811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:16:42.853161 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 10:16:42.861541 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:16:42.870356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:16:42.887477 systemd-journald[1112]: Time spent on flushing to /var/log/journal/0275203ecd294d70b7de29e194923a99 is 137.578ms for 950 entries. Sep 12 10:16:42.887477 systemd-journald[1112]: System Journal (/var/log/journal/0275203ecd294d70b7de29e194923a99) is 8M, max 584.8M, 576.8M free. Sep 12 10:16:43.066695 systemd-journald[1112]: Received client request to flush runtime journal. Sep 12 10:16:43.066780 kernel: loop0: detected capacity change from 0 to 147912 Sep 12 10:16:43.066813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 10:16:42.897169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 10:16:42.920273 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 10:16:42.939401 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 10:16:42.968082 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 10:16:42.982341 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 10:16:42.997614 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 10:16:43.010012 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 10:16:43.022079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:16:43.057744 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 10:16:43.088038 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 10:16:43.100534 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 10:16:43.125352 kernel: loop1: detected capacity change from 0 to 138176 Sep 12 10:16:43.128524 udevadm[1151]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 10:16:43.146085 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 10:16:43.166404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:16:43.181064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 10:16:43.188633 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 10:16:43.234138 kernel: loop2: detected capacity change from 0 to 224512 Sep 12 10:16:43.276360 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 12 10:16:43.276399 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 12 10:16:43.302540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 10:16:43.385757 kernel: loop3: detected capacity change from 0 to 52152 Sep 12 10:16:43.484137 kernel: loop4: detected capacity change from 0 to 147912 Sep 12 10:16:43.563158 kernel: loop5: detected capacity change from 0 to 138176 Sep 12 10:16:43.645163 kernel: loop6: detected capacity change from 0 to 224512 Sep 12 10:16:43.696304 kernel: loop7: detected capacity change from 0 to 52152 Sep 12 10:16:43.730258 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 12 10:16:43.732044 (sd-merge)[1172]: Merged extensions into '/usr'. Sep 12 10:16:43.744819 systemd[1]: Reload requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 10:16:43.744843 systemd[1]: Reloading... Sep 12 10:16:43.938197 zram_generator::config[1196]: No configuration found. Sep 12 10:16:44.151140 ldconfig[1143]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 10:16:44.244178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:16:44.399395 systemd[1]: Reloading finished in 652 ms. Sep 12 10:16:44.422254 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 10:16:44.433289 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 10:16:44.459195 systemd[1]: Starting ensure-sysext.service... Sep 12 10:16:44.476670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:16:44.508404 systemd[1]: Reload requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Sep 12 10:16:44.508610 systemd[1]: Reloading... Sep 12 10:16:44.575298 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 10:16:44.577556 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 10:16:44.588183 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 10:16:44.589211 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 12 10:16:44.589349 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Sep 12 10:16:44.611404 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:16:44.611665 systemd-tmpfiles[1241]: Skipping /boot Sep 12 10:16:44.644025 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:16:44.644339 systemd-tmpfiles[1241]: Skipping /boot Sep 12 10:16:44.706702 zram_generator::config[1273]: No configuration found. Sep 12 10:16:44.850370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:16:44.948090 systemd[1]: Reloading finished in 438 ms. Sep 12 10:16:44.964027 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 10:16:44.994870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:16:45.022542 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 12 10:16:45.041257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 10:16:45.057717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 10:16:45.079772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 10:16:45.099822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:16:45.120459 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 10:16:45.140545 augenrules[1337]: No rules Sep 12 10:16:45.146352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:45.146727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:16:45.153018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:16:45.170588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:16:45.190212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:16:45.200387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:16:45.201349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:16:45.211373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 10:16:45.219660 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Sep 12 10:16:45.221265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:45.225864 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:16:45.226429 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:16:45.237408 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 10:16:45.249773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:16:45.250524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:16:45.260706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:16:45.262042 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:16:45.275340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 10:16:45.289078 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:16:45.289448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:16:45.300022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:16:45.312134 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 10:16:45.350013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:45.352618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:16:45.362314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 10:16:45.383272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:16:45.405711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:16:45.415390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:16:45.415903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:16:45.430434 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:16:45.449505 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 10:16:45.459265 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 10:16:45.460030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:45.473751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:16:45.475267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:16:45.489989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:16:45.490508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:16:45.506811 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:16:45.507227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:16:45.521939 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 10:16:45.547242 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 10:16:45.563982 systemd-resolved[1330]: Positive Trust Anchors: Sep 12 10:16:45.564011 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:16:45.564076 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:16:45.605241 systemd[1]: Finished ensure-sysext.service. Sep 12 10:16:45.610297 systemd-resolved[1330]: Defaulting to hostname 'linux'. Sep 12 10:16:45.715742 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:16:45.733389 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Sep 12 10:16:45.733614 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 10:16:45.737840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 12 10:16:45.754156 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 10:16:45.751066 systemd-networkd[1381]: lo: Link UP Sep 12 10:16:45.751083 systemd-networkd[1381]: lo: Gained carrier Sep 12 10:16:45.756484 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 12 10:16:45.758923 systemd-networkd[1381]: Enumeration completed Sep 12 10:16:45.763582 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:45.763602 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:16:45.764364 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:45.767194 kernel: ACPI: button: Power Button [PWRF] Sep 12 10:16:45.764419 systemd-networkd[1381]: eth0: Link UP Sep 12 10:16:45.764425 systemd-networkd[1381]: eth0: Gained carrier Sep 12 10:16:45.764445 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:45.781831 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 12 10:16:45.782127 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 10:16:45.782276 systemd-networkd[1381]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-2-nightly-20250911-2100-377226d477597500f469.c.flatcar-212911.internal' to 'ci-4230-2-2-nightly-20250911-2100-377226d477597500f469' Sep 12 10:16:45.782307 systemd-networkd[1381]: eth0: DHCPv4 address 10.128.0.19/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 12 10:16:45.783678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:16:45.792422 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:16:45.801548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:16:45.809471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:16:45.827421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:16:45.850680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:16:45.869408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:16:45.897419 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 10:16:45.906431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:16:45.906515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:16:45.906619 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 10:16:45.916344 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 10:16:45.916405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 10:16:45.917052 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:16:45.929481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:16:45.930139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:16:45.933348 augenrules[1398]: /sbin/augenrules: No change Sep 12 10:16:45.939914 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:16:45.940324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:16:45.950844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:16:45.952296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:16:45.964519 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:16:45.964938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:16:45.966429 augenrules[1421]: No rules Sep 12 10:16:45.986154 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 10:16:45.986671 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1364) Sep 12 10:16:45.996843 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:16:45.997230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:16:46.016129 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 10:16:46.057176 kernel: EDAC MC: Ver: 3.0.0 Sep 12 10:16:46.053915 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 10:16:46.112336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 12 10:16:46.131125 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 10:16:46.154932 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 10:16:46.181211 systemd[1]: Reached target network.target - Network. Sep 12 10:16:46.199428 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 10:16:46.219762 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:16:46.219477 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 12 10:16:46.236027 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 10:16:46.260531 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 10:16:46.283535 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 10:16:46.283940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:16:46.284460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:16:46.288418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:46.291165 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 10:16:46.295580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:16:46.305429 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 10:16:46.311703 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. 
Sep 12 10:16:46.338022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 10:16:46.352601 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:16:46.372261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 10:16:46.403352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 10:16:46.421538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:46.433756 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:16:46.444480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 10:16:46.456437 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 10:16:46.468640 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 10:16:46.478577 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 10:16:46.490392 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 10:16:46.502386 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 10:16:46.502472 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:16:46.511363 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:16:46.522949 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 10:16:46.535624 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 10:16:46.547083 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 10:16:46.558711 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 10:16:46.570415 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 10:16:46.589402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 10:16:46.601398 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 10:16:46.613471 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 10:16:46.623557 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:16:46.633323 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:16:46.642427 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:16:46.642489 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:16:46.648259 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 10:16:46.672564 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 10:16:46.695667 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 10:16:46.728024 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 10:16:46.746233 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 12 10:16:46.757128 jq[1470]: false Sep 12 10:16:46.756285 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 10:16:46.766499 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 10:16:46.785313 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 10:16:46.803287 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 10:16:46.814557 coreos-metadata[1468]: Sep 12 10:16:46.814 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 12 10:16:46.815502 coreos-metadata[1468]: Sep 12 10:16:46.815 INFO Fetch successful Sep 12 10:16:46.815502 coreos-metadata[1468]: Sep 12 10:16:46.815 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 12 10:16:46.815918 coreos-metadata[1468]: Sep 12 10:16:46.815 INFO Fetch successful Sep 12 10:16:46.815918 coreos-metadata[1468]: Sep 12 10:16:46.815 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 12 10:16:46.818361 coreos-metadata[1468]: Sep 12 10:16:46.818 INFO Fetch successful Sep 12 10:16:46.818361 coreos-metadata[1468]: Sep 12 10:16:46.818 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 12 10:16:46.820143 coreos-metadata[1468]: Sep 12 10:16:46.818 INFO Fetch successful Sep 12 10:16:46.826505 extend-filesystems[1472]: Found loop4 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found loop5 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found loop6 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found loop7 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda1 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda2 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda3 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found usr Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda4 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda6 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda7 Sep 12 10:16:46.846437 extend-filesystems[1472]: Found sda9 Sep 12 10:16:46.846437 extend-filesystems[1472]: Checking size of /dev/sda9 Sep 12 10:16:46.994474 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 12 10:16:46.994538 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 12 10:16:46.827378 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 10:16:46.994786 extend-filesystems[1472]: Resized partition /dev/sda9 Sep 12 10:16:46.886396 dbus-daemon[1469]: [system] SELinux support is enabled Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 08:14:39 UTC 2025 (1): Starting Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: ---------------------------------------------------- Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: ntp-4 is maintained by Network Time Foundation, Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: corporation. 
Support and training for ntp-4 are Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: available at https://www.nwtime.org/support Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: ---------------------------------------------------- Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: proto: precision = 0.098 usec (-23) Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: basedate set to 2025-08-31 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: gps base set to 2025-08-31 (week 2382) Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listen normally on 3 eth0 10.128.0.19:123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listen normally on 4 lo [::1]:123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: bind(21) AF_INET6 fe80::4001:aff:fe80:13%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:13%2#123 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: failed to init interface for address fe80::4001:aff:fe80:13%2 Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: Listening on routing socket on fd #21 for interface updates Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:16:47.022664 ntpd[1476]: 12 Sep 10:16:46 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:16:46.846379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 10:16:47.025746 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Sep 12 10:16:47.025746 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 10:16:47.025746 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 12 10:16:47.025746 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 12 10:16:47.102464 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1364) Sep 12 10:16:46.897637 dbus-daemon[1469]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1381 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 10:16:46.867382 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 10:16:47.102965 extend-filesystems[1472]: Resized filesystem in /dev/sda9 Sep 12 10:16:46.945062 ntpd[1476]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 08:14:39 UTC 2025 (1): Starting Sep 12 10:16:46.908947 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 12 10:16:46.946185 ntpd[1476]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 10:16:46.914449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
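extend-filesystems above grows the root filesystem on /dev/sda9 on-line from 1617920 to 2538491 4k blocks (roughly 6.2 GiB to 9.7 GiB) using resize2fs 1.47.1; ext4 supports growing while mounted. A sketch of the equivalent manual steps, with the device name taken from the log:

  lsblk /dev/sda                # confirm sda9 now spans the enlarged disk
  sudo resize2fs /dev/sda9      # on-line grow of the mounted ext4 filesystem
  df -h /                       # the root filesystem should report ~9.7 GiB afterwards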
Sep 12 10:16:46.946216 ntpd[1476]: ---------------------------------------------------- Sep 12 10:16:46.919436 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 10:16:46.946231 ntpd[1476]: ntp-4 is maintained by Network Time Foundation, Sep 12 10:16:47.128820 update_engine[1495]: I20250912 10:16:47.094811 1495 main.cc:92] Flatcar Update Engine starting Sep 12 10:16:47.128820 update_engine[1495]: I20250912 10:16:47.104082 1495 update_check_scheduler.cc:74] Next update check in 4m33s Sep 12 10:16:46.932272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 10:16:46.946245 ntpd[1476]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 10:16:47.130548 jq[1496]: true Sep 12 10:16:46.949838 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 10:16:46.946259 ntpd[1476]: corporation. Support and training for ntp-4 are Sep 12 10:16:47.041977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 10:16:46.946275 ntpd[1476]: available at https://www.nwtime.org/support Sep 12 10:16:47.043477 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 10:16:46.946291 ntpd[1476]: ---------------------------------------------------- Sep 12 10:16:47.044139 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 10:16:46.950728 ntpd[1476]: proto: precision = 0.098 usec (-23) Sep 12 10:16:47.045224 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 10:16:46.951701 ntpd[1476]: basedate set to 2025-08-31 Sep 12 10:16:47.060555 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 10:16:46.951723 ntpd[1476]: gps base set to 2025-08-31 (week 2382) Sep 12 10:16:47.062221 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 10:16:46.955852 ntpd[1476]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 10:16:47.093744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 10:16:46.955910 ntpd[1476]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 10:16:47.095260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 10:16:46.957001 ntpd[1476]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 10:16:46.957068 ntpd[1476]: Listen normally on 3 eth0 10.128.0.19:123 Sep 12 10:16:46.958202 ntpd[1476]: Listen normally on 4 lo [::1]:123 Sep 12 10:16:46.958303 ntpd[1476]: bind(21) AF_INET6 fe80::4001:aff:fe80:13%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 10:16:46.958337 ntpd[1476]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:13%2#123 Sep 12 10:16:46.958360 ntpd[1476]: failed to init interface for address fe80::4001:aff:fe80:13%2 Sep 12 10:16:46.958410 ntpd[1476]: Listening on routing socket on fd #21 for interface updates Sep 12 10:16:46.963009 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:16:46.963047 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 10:16:47.142732 systemd-logind[1490]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 10:16:47.142771 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 12 10:16:47.142804 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 10:16:47.143151 systemd-logind[1490]: New seat seat0. Sep 12 10:16:47.146756 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 12 10:16:47.196172 systemd-networkd[1381]: eth0: Gained IPv6LL Sep 12 10:16:47.208218 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 10:16:47.214657 jq[1506]: true Sep 12 10:16:47.215780 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 10:16:47.221921 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 10:16:47.271024 dbus-daemon[1469]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 10:16:47.361593 systemd[1]: Started update-engine.service - Update Engine. Sep 12 10:16:47.376942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 10:16:47.394663 tar[1505]: linux-amd64/LICENSE Sep 12 10:16:47.394663 tar[1505]: linux-amd64/helm Sep 12 10:16:47.389086 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 10:16:47.411065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:16:47.433459 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 10:16:47.452673 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 12 10:16:47.461381 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 10:16:47.461919 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 10:16:47.462540 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 10:16:47.487653 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 10:16:47.495831 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 10:16:47.496188 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 10:16:47.507588 init.sh[1540]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 12 10:16:47.514147 init.sh[1540]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 12 10:16:47.514147 init.sh[1540]: + /usr/bin/google_instance_setup Sep 12 10:16:47.517489 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 10:16:47.545377 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:16:47.545679 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 10:16:47.580592 systemd[1]: Starting sshkeys.service... Sep 12 10:16:47.690269 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 10:16:47.720348 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 10:16:47.733867 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 10:16:47.740224 dbus-daemon[1469]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 10:16:47.754891 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
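eth0 gaining an IPv6 link-local address above is what later lets ntpd bind the fe80:: address it failed to bind earlier in this log, and network-online.target gates the units that follow (kubelet, nvidia, oem-gce). A sketch of how the same state could be verified, assuming systemd-networkd tooling is present on the host:

  networkctl status eth0                           # carrier, addresses, online state
  ip -6 addr show dev eth0 scope link              # the fe80:: address just gained
  systemctl is-active systemd-networkd-wait-online.service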
Sep 12 10:16:47.765209 dbus-daemon[1469]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1542 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 10:16:47.791810 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 10:16:48.029830 polkitd[1563]: Started polkitd version 121 Sep 12 10:16:48.060478 coreos-metadata[1557]: Sep 12 10:16:48.060 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 12 10:16:48.065246 coreos-metadata[1557]: Sep 12 10:16:48.064 INFO Fetch failed with 404: resource not found Sep 12 10:16:48.065246 coreos-metadata[1557]: Sep 12 10:16:48.064 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 12 10:16:48.065810 coreos-metadata[1557]: Sep 12 10:16:48.065 INFO Fetch successful Sep 12 10:16:48.065810 coreos-metadata[1557]: Sep 12 10:16:48.065 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 12 10:16:48.072891 coreos-metadata[1557]: Sep 12 10:16:48.068 INFO Fetch failed with 404: resource not found Sep 12 10:16:48.072891 coreos-metadata[1557]: Sep 12 10:16:48.068 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 12 10:16:48.072891 coreos-metadata[1557]: Sep 12 10:16:48.068 INFO Fetch failed with 404: resource not found Sep 12 10:16:48.072891 coreos-metadata[1557]: Sep 12 10:16:48.068 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 12 10:16:48.075984 coreos-metadata[1557]: Sep 12 10:16:48.074 INFO Fetch successful Sep 12 10:16:48.081770 unknown[1557]: wrote ssh authorized keys file for user: core Sep 12 10:16:48.087315 polkitd[1563]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 10:16:48.087484 polkitd[1563]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 10:16:48.096063 polkitd[1563]: Finished loading, compiling and executing 2 rules Sep 12 10:16:48.103019 dbus-daemon[1469]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 10:16:48.105577 polkitd[1563]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 10:16:48.109687 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 10:16:48.199346 update-ssh-keys[1571]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:16:48.200398 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 10:16:48.229623 systemd[1]: Finished sshkeys.service. Sep 12 10:16:48.233732 systemd-hostnamed[1542]: Hostname set to (transient) Sep 12 10:16:48.242990 systemd-resolved[1330]: System hostname changed to 'ci-4230-2-2-nightly-20250911-2100-377226d477597500f469'. Sep 12 10:16:48.327040 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 10:16:48.430524 containerd[1513]: time="2025-09-12T10:16:48.427951322Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 10:16:48.575682 containerd[1513]: time="2025-09-12T10:16:48.574452025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.590999 containerd[1513]: time="2025-09-12T10:16:48.590816705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
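coreos-metadata-sshkeys above walks a fallback chain of metadata attributes, treating 404s as "not set": instance sshKeys, instance ssh-keys, block-project-ssh-keys, then the project-level keys. A simplified sketch of that lookup order (ignoring the block-project-ssh-keys gate), assuming the same endpoints as the log:

  MD=http://169.254.169.254/computeMetadata/v1
  for path in instance/attributes/sshKeys instance/attributes/ssh-keys \
              project/attributes/sshKeys project/attributes/ssh-keys; do
    # -f makes curl fail silently on 404 so the loop falls through, as in the log
    keys=$(curl -sf -H 'Metadata-Flavor: Google' "$MD/$path") && { echo "$keys"; break; }
  done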
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:16:48.590999 containerd[1513]: time="2025-09-12T10:16:48.590907965Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 10:16:48.590999 containerd[1513]: time="2025-09-12T10:16:48.590939291Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 10:16:48.592786 containerd[1513]: time="2025-09-12T10:16:48.592733434Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 10:16:48.592922 containerd[1513]: time="2025-09-12T10:16:48.592791442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.592922 containerd[1513]: time="2025-09-12T10:16:48.592907594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:16:48.593010 containerd[1513]: time="2025-09-12T10:16:48.592931028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.594062 containerd[1513]: time="2025-09-12T10:16:48.593385525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:16:48.594062 containerd[1513]: time="2025-09-12T10:16:48.593428440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.594062 containerd[1513]: time="2025-09-12T10:16:48.593472399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:16:48.594062 containerd[1513]: time="2025-09-12T10:16:48.593491838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.598127 containerd[1513]: time="2025-09-12T10:16:48.596916921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.600491 containerd[1513]: time="2025-09-12T10:16:48.600352340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:16:48.605197 containerd[1513]: time="2025-09-12T10:16:48.604430164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:16:48.605197 containerd[1513]: time="2025-09-12T10:16:48.604481810Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 10:16:48.605197 containerd[1513]: time="2025-09-12T10:16:48.604660348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 10:16:48.605197 containerd[1513]: time="2025-09-12T10:16:48.604744365Z" level=info msg="metadata content store policy set" policy=shared Sep 12 10:16:48.616759 containerd[1513]: time="2025-09-12T10:16:48.616683817Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 10:16:48.616952 containerd[1513]: time="2025-09-12T10:16:48.616806268Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 10:16:48.616952 containerd[1513]: time="2025-09-12T10:16:48.616850065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 10:16:48.616952 containerd[1513]: time="2025-09-12T10:16:48.616879132Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 10:16:48.616952 containerd[1513]: time="2025-09-12T10:16:48.616906577Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 10:16:48.617591 containerd[1513]: time="2025-09-12T10:16:48.617197462Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 10:16:48.617915 containerd[1513]: time="2025-09-12T10:16:48.617631068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 10:16:48.617915 containerd[1513]: time="2025-09-12T10:16:48.617829590Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 10:16:48.617915 containerd[1513]: time="2025-09-12T10:16:48.617860653Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 10:16:48.617915 containerd[1513]: time="2025-09-12T10:16:48.617885093Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 10:16:48.617915 containerd[1513]: time="2025-09-12T10:16:48.617910425Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.617933781Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.617957229Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.617983722Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618010070Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618034243Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618057599Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618203909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618264967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618319949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618357775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618381821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618410926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618434926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.621541 containerd[1513]: time="2025-09-12T10:16:48.618456213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618479681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618507262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618533287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618555260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618576649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618599660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618626020Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618664786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618690444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618710715Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618864028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618906936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:16:48.622143 containerd[1513]: time="2025-09-12T10:16:48.618927540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:16:48.622679 containerd[1513]: time="2025-09-12T10:16:48.618949635Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:16:48.622679 containerd[1513]: time="2025-09-12T10:16:48.618967658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.622679 containerd[1513]: time="2025-09-12T10:16:48.618991300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:16:48.622679 containerd[1513]: time="2025-09-12T10:16:48.619009793Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:16:48.622679 containerd[1513]: time="2025-09-12T10:16:48.619028133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.621041340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.623539090Z" level=info msg="Connect containerd service" Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.623630846Z" level=info msg="using legacy CRI server" Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.623645470Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.623872419Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:16:48.629819 containerd[1513]: time="2025-09-12T10:16:48.629728379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633263395Z" level=info msg="Start subscribing containerd event" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633356565Z" level=info msg="Start recovering state" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633470775Z" level=info msg="Start event monitor" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633488448Z" level=info msg="Start snapshots syncer" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633505504Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:16:48.635201 containerd[1513]: time="2025-09-12T10:16:48.633518882Z" level=info msg="Start streaming server" Sep 12 10:16:48.639130 containerd[1513]: time="2025-09-12T10:16:48.636277302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:16:48.639130 containerd[1513]: time="2025-09-12T10:16:48.636371815Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:16:48.636640 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:16:48.653523 containerd[1513]: time="2025-09-12T10:16:48.650632296Z" level=info msg="containerd successfully booted in 0.228425s" Sep 12 10:16:49.082411 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 10:16:49.133398 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 10:16:49.154650 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 10:16:49.172667 systemd[1]: Started sshd@0-10.128.0.19:22-80.94.95.115:52000.service - OpenSSH per-connection server daemon (80.94.95.115:52000). Sep 12 10:16:49.201131 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 10:16:49.201683 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 10:16:49.226277 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 10:16:49.308201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 10:16:49.328821 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 10:16:49.346021 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 10:16:49.356658 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 10:16:49.382299 instance-setup[1544]: INFO Running google_set_multiqueue. 
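The containerd dump above shows the CRI plugin settings in effect: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, pause image registry.k8s.io/pause:3.8, and a CNI error because /etc/cni/net.d is still empty (a network add-on normally installs a conflist there later). A sketch of how those settings could be checked, assuming containerd's and crictl's standard subcommands:

  containerd config dump | grep -n 'SystemdCgroup'   # should show SystemdCgroup = true for runc
  crictl info | grep -iE 'snapshotter|sandboxImage'  # overlayfs and the pause image
  ls /etc/cni/net.d                                  # empty until a CNI conflist is installed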
Sep 12 10:16:49.417780 instance-setup[1544]: INFO Set channels for eth0 to 2. Sep 12 10:16:49.425534 instance-setup[1544]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 12 10:16:49.428873 instance-setup[1544]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 12 10:16:49.428959 instance-setup[1544]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 12 10:16:49.431940 instance-setup[1544]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 12 10:16:49.432030 instance-setup[1544]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 12 10:16:49.434735 instance-setup[1544]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 12 10:16:49.434789 instance-setup[1544]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 12 10:16:49.436630 instance-setup[1544]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 12 10:16:49.446443 instance-setup[1544]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 12 10:16:49.454644 instance-setup[1544]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 12 10:16:49.455389 instance-setup[1544]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 12 10:16:49.455445 instance-setup[1544]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 12 10:16:49.492890 init.sh[1540]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 12 10:16:49.558164 tar[1505]: linux-amd64/README.md Sep 12 10:16:49.584215 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 10:16:49.728007 startup-script[1632]: INFO Starting startup scripts. Sep 12 10:16:49.734881 startup-script[1632]: INFO No startup scripts found in metadata. Sep 12 10:16:49.734963 startup-script[1632]: INFO Finished running startup scripts. Sep 12 10:16:49.772993 init.sh[1540]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 12 10:16:49.772993 init.sh[1540]: + daemon_pids=() Sep 12 10:16:49.773222 init.sh[1540]: + for d in accounts clock_skew network Sep 12 10:16:49.773773 init.sh[1540]: + daemon_pids+=($!) Sep 12 10:16:49.773773 init.sh[1540]: + for d in accounts clock_skew network Sep 12 10:16:49.773931 init.sh[1638]: + /usr/bin/google_accounts_daemon Sep 12 10:16:49.774359 init.sh[1540]: + daemon_pids+=($!) Sep 12 10:16:49.774359 init.sh[1540]: + for d in accounts clock_skew network Sep 12 10:16:49.774359 init.sh[1540]: + daemon_pids+=($!) Sep 12 10:16:49.774359 init.sh[1540]: + NOTIFY_SOCKET=/run/systemd/notify Sep 12 10:16:49.774359 init.sh[1540]: + /usr/bin/systemd-notify --ready Sep 12 10:16:49.775147 init.sh[1640]: + /usr/bin/google_network_daemon Sep 12 10:16:49.776177 init.sh[1639]: + /usr/bin/google_clock_skew_daemon Sep 12 10:16:49.798816 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 12 10:16:49.816414 init.sh[1540]: + wait -n 1638 1639 1640 Sep 12 10:16:49.946930 ntpd[1476]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:13%2]:123 Sep 12 10:16:49.947729 ntpd[1476]: 12 Sep 10:16:49 ntpd[1476]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:13%2]:123 Sep 12 10:16:50.114454 google-clock-skew[1639]: INFO Starting Google Clock Skew daemon. Sep 12 10:16:50.127321 google-clock-skew[1639]: INFO Clock drift token has changed: 0. Sep 12 10:16:50.219334 google-networking[1640]: INFO Starting Google Networking daemon. 
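google_set_multiqueue above pins the virtio-net IRQs across the two vCPUs (31/32 to CPU 0, 33/34 to CPU 1) and sets per-queue XPS masks. A sketch for reading those settings back, with the IRQ numbers and sysfs paths taken from the log:

  for irq in 31 32 33 34; do
    printf 'irq %s -> cpu %s\n' "$irq" "$(cat /proc/irq/$irq/smp_affinity_list)"
  done
  cat /sys/class/net/eth0/queues/tx-0/xps_cpus   # expect 1 (CPU 0) per the log
  cat /sys/class/net/eth0/queues/tx-1/xps_cpus   # expect 2 (CPU 1)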
Sep 12 10:16:50.274673 groupadd[1650]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 12 10:16:50.279398 groupadd[1650]: group added to /etc/gshadow: name=google-sudoers Sep 12 10:16:50.342224 groupadd[1650]: new group: name=google-sudoers, GID=1000 Sep 12 10:16:50.377931 google-accounts[1638]: INFO Starting Google Accounts daemon. Sep 12 10:16:50.391704 google-accounts[1638]: WARNING OS Login not installed. Sep 12 10:16:50.393220 google-accounts[1638]: INFO Creating a new user account for 0. Sep 12 10:16:50.400223 init.sh[1658]: useradd: invalid user name '0': use --badname to ignore Sep 12 10:16:50.399723 google-accounts[1638]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 12 10:16:50.517390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:16:50.530619 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 10:16:50.535778 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:16:50.540979 systemd[1]: Startup finished in 1.199s (kernel) + 10.083s (initrd) + 10.741s (userspace) = 22.024s. Sep 12 10:16:51.000444 systemd-resolved[1330]: Clock change detected. Flushing caches. Sep 12 10:16:51.000724 google-clock-skew[1639]: INFO Synced system time with hardware clock. Sep 12 10:16:51.236303 sshd[1595]: Connection closed by authenticating user root 80.94.95.115 port 52000 [preauth] Sep 12 10:16:51.240162 systemd[1]: sshd@0-10.128.0.19:22-80.94.95.115:52000.service: Deactivated successfully. Sep 12 10:16:51.438363 kubelet[1665]: E0912 10:16:51.438274 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:16:51.442006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:16:51.442294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:16:51.442876 systemd[1]: kubelet.service: Consumed 1.382s CPU time, 264.9M memory peak. Sep 12 10:16:56.416117 systemd[1]: Started sshd@1-10.128.0.19:22-139.178.89.65:38046.service - OpenSSH per-connection server daemon (139.178.89.65:38046). Sep 12 10:16:56.801363 sshd[1679]: Accepted publickey for core from 139.178.89.65 port 38046 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:16:56.805773 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:16:56.815518 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 10:16:56.821051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 10:16:56.835026 systemd-logind[1490]: New session 1 of user core. Sep 12 10:16:56.852883 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 10:16:56.865302 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 10:16:56.888563 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 10:16:56.892455 systemd-logind[1490]: New session c1 of user core. Sep 12 10:16:57.086789 systemd[1683]: Queued start job for default target default.target. 
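The kubelet failure above (and its later restarts in this log) is the expected state before the node is bootstrapped: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and systemd simply keeps retrying the unit until it appears. Purely as a hypothetical illustration of what that file contains (not the provisioning path this image actually uses), a minimal KubeletConfiguration:

  # Hypothetical example only; kubeadm generates the real file during init/join.
  sudo mkdir -p /var/lib/kubelet
  printf '%s\n' \
    'apiVersion: kubelet.config.k8s.io/v1beta1' \
    'kind: KubeletConfiguration' \
    'cgroupDriver: systemd' | sudo tee /var/lib/kubelet/config.yaml >/dev/null
  sudo systemctl restart kubelet.service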
Sep 12 10:16:57.102472 systemd[1683]: Created slice app.slice - User Application Slice. Sep 12 10:16:57.102527 systemd[1683]: Reached target paths.target - Paths. Sep 12 10:16:57.102784 systemd[1683]: Reached target timers.target - Timers. Sep 12 10:16:57.104774 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 10:16:57.120341 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 10:16:57.120542 systemd[1683]: Reached target sockets.target - Sockets. Sep 12 10:16:57.120621 systemd[1683]: Reached target basic.target - Basic System. Sep 12 10:16:57.121238 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 10:16:57.121459 systemd[1683]: Reached target default.target - Main User Target. Sep 12 10:16:57.121676 systemd[1683]: Startup finished in 218ms. Sep 12 10:16:57.131958 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 10:16:57.428201 systemd[1]: Started sshd@2-10.128.0.19:22-139.178.89.65:38062.service - OpenSSH per-connection server daemon (139.178.89.65:38062). Sep 12 10:16:57.807904 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 38062 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:16:57.809929 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:16:57.816188 systemd-logind[1490]: New session 2 of user core. Sep 12 10:16:57.825975 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 10:16:58.082173 sshd[1696]: Connection closed by 139.178.89.65 port 38062 Sep 12 10:16:58.083677 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 12 10:16:58.089602 systemd[1]: sshd@2-10.128.0.19:22-139.178.89.65:38062.service: Deactivated successfully. Sep 12 10:16:58.092777 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 10:16:58.094872 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. Sep 12 10:16:58.096619 systemd-logind[1490]: Removed session 2. Sep 12 10:16:58.159252 systemd[1]: Started sshd@3-10.128.0.19:22-139.178.89.65:38076.service - OpenSSH per-connection server daemon (139.178.89.65:38076). Sep 12 10:16:58.523926 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 38076 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:16:58.526185 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:16:58.535092 systemd-logind[1490]: New session 3 of user core. Sep 12 10:16:58.541935 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 10:16:58.783046 sshd[1704]: Connection closed by 139.178.89.65 port 38076 Sep 12 10:16:58.784374 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Sep 12 10:16:58.789143 systemd[1]: sshd@3-10.128.0.19:22-139.178.89.65:38076.service: Deactivated successfully. Sep 12 10:16:58.791794 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 10:16:58.793614 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. Sep 12 10:16:58.795216 systemd-logind[1490]: Removed session 3. Sep 12 10:16:58.859120 systemd[1]: Started sshd@4-10.128.0.19:22-139.178.89.65:38090.service - OpenSSH per-connection server daemon (139.178.89.65:38090). 
Sep 12 10:16:59.248772 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 38090 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:16:59.250802 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:16:59.257791 systemd-logind[1490]: New session 4 of user core. Sep 12 10:16:59.261909 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 10:16:59.523147 sshd[1712]: Connection closed by 139.178.89.65 port 38090 Sep 12 10:16:59.524596 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Sep 12 10:16:59.529084 systemd[1]: sshd@4-10.128.0.19:22-139.178.89.65:38090.service: Deactivated successfully. Sep 12 10:16:59.531855 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 10:16:59.533848 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. Sep 12 10:16:59.535328 systemd-logind[1490]: Removed session 4. Sep 12 10:16:59.605241 systemd[1]: Started sshd@5-10.128.0.19:22-139.178.89.65:38100.service - OpenSSH per-connection server daemon (139.178.89.65:38100). Sep 12 10:16:59.989208 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 38100 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:16:59.991183 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:16:59.997729 systemd-logind[1490]: New session 5 of user core. Sep 12 10:17:00.003944 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 10:17:00.232625 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 10:17:00.233281 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:17:00.252377 sudo[1721]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:00.311252 sshd[1720]: Connection closed by 139.178.89.65 port 38100 Sep 12 10:17:00.312681 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 12 10:17:00.318386 systemd[1]: sshd@5-10.128.0.19:22-139.178.89.65:38100.service: Deactivated successfully. Sep 12 10:17:00.321463 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 10:17:00.323591 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. Sep 12 10:17:00.325187 systemd-logind[1490]: Removed session 5. Sep 12 10:17:00.385121 systemd[1]: Started sshd@6-10.128.0.19:22-139.178.89.65:52502.service - OpenSSH per-connection server daemon (139.178.89.65:52502). Sep 12 10:17:00.781361 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 52502 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:17:00.783392 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:17:00.789903 systemd-logind[1490]: New session 6 of user core. Sep 12 10:17:00.801030 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 10:17:01.009268 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 10:17:01.009839 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:17:01.015827 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:01.030996 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 10:17:01.031501 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:17:01.051393 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:17:01.094450 augenrules[1753]: No rules Sep 12 10:17:01.095412 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:17:01.095712 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:17:01.097317 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:01.156172 sshd[1729]: Connection closed by 139.178.89.65 port 52502 Sep 12 10:17:01.157201 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 12 10:17:01.163284 systemd[1]: sshd@6-10.128.0.19:22-139.178.89.65:52502.service: Deactivated successfully. Sep 12 10:17:01.165921 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 10:17:01.167118 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. Sep 12 10:17:01.168578 systemd-logind[1490]: Removed session 6. Sep 12 10:17:01.233087 systemd[1]: Started sshd@7-10.128.0.19:22-139.178.89.65:52506.service - OpenSSH per-connection server daemon (139.178.89.65:52506). Sep 12 10:17:01.525049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 10:17:01.531080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:01.619181 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 52506 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:17:01.621593 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:17:01.630123 systemd-logind[1490]: New session 7 of user core. Sep 12 10:17:01.637028 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 10:17:01.854788 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 10:17:01.855339 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:17:01.857821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:01.872246 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:17:01.973063 kubelet[1774]: E0912 10:17:01.972975 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:17:01.979978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:17:01.980238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:17:01.981132 systemd[1]: kubelet.service: Consumed 248ms CPU time, 111.8M memory peak. Sep 12 10:17:02.426090 systemd[1]: Starting docker.service - Docker Application Container Engine... 
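The sequence above (removing 80-selinux.rules and 99-default.rules, then restarting audit-rules.service, with augenrules reporting "No rules") leaves the kernel audit subsystem with an empty rule set. A small sketch for confirming that, assuming the standard audit userspace tools are installed:

  sudo auditctl -l        # prints "No rules" when the loaded rule set is empty
  ls /etc/audit/rules.d/  # the two removed files should be gone
  sudo augenrules --load  # regenerate and load rules from whatever remains in rules.d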
Sep 12 10:17:02.427746 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 10:17:02.888574 dockerd[1798]: time="2025-09-12T10:17:02.888475797Z" level=info msg="Starting up" Sep 12 10:17:03.038489 dockerd[1798]: time="2025-09-12T10:17:03.038379407Z" level=info msg="Loading containers: start." Sep 12 10:17:03.272809 kernel: Initializing XFRM netlink socket Sep 12 10:17:03.394902 systemd-networkd[1381]: docker0: Link UP Sep 12 10:17:03.430755 dockerd[1798]: time="2025-09-12T10:17:03.430691915Z" level=info msg="Loading containers: done." Sep 12 10:17:03.451092 dockerd[1798]: time="2025-09-12T10:17:03.451021758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 10:17:03.451351 dockerd[1798]: time="2025-09-12T10:17:03.451175753Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 10:17:03.451417 dockerd[1798]: time="2025-09-12T10:17:03.451348043Z" level=info msg="Daemon has completed initialization" Sep 12 10:17:03.497686 dockerd[1798]: time="2025-09-12T10:17:03.494927287Z" level=info msg="API listen on /run/docker.sock" Sep 12 10:17:03.496837 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 10:17:04.488780 containerd[1513]: time="2025-09-12T10:17:04.488637506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 10:17:05.002915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198419793.mount: Deactivated successfully. 
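dockerd above comes up on the overlay2 storage driver and warns that native overlayfs diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; systemd-networkd also reports the docker0 bridge link up. A sketch of checks that surface the same information, assuming the docker CLI is on the host:

  docker info --format '{{.Driver}}'               # overlay2
  docker info 2>/dev/null | grep -i 'Native Overlay Diff'
  ip link show docker0                             # the bridge reported UP above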
Sep 12 10:17:06.990590 containerd[1513]: time="2025-09-12T10:17:06.990504300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:06.992376 containerd[1513]: time="2025-09-12T10:17:06.992303518Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28845499" Sep 12 10:17:06.994095 containerd[1513]: time="2025-09-12T10:17:06.993541612Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:06.997262 containerd[1513]: time="2025-09-12T10:17:06.997215883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:06.998951 containerd[1513]: time="2025-09-12T10:17:06.998911447Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.510113693s" Sep 12 10:17:06.999122 containerd[1513]: time="2025-09-12T10:17:06.999094542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 10:17:07.000616 containerd[1513]: time="2025-09-12T10:17:07.000579876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 10:17:08.728674 containerd[1513]: time="2025-09-12T10:17:08.728542638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:08.730428 containerd[1513]: time="2025-09-12T10:17:08.730352395Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24788961" Sep 12 10:17:08.732291 containerd[1513]: time="2025-09-12T10:17:08.731562063Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:08.735544 containerd[1513]: time="2025-09-12T10:17:08.735474257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:08.737255 containerd[1513]: time="2025-09-12T10:17:08.737206164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.736580341s" Sep 12 10:17:08.737383 containerd[1513]: time="2025-09-12T10:17:08.737263316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 10:17:08.738000 
containerd[1513]: time="2025-09-12T10:17:08.737966991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 10:17:10.221200 containerd[1513]: time="2025-09-12T10:17:10.221125057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:10.222908 containerd[1513]: time="2025-09-12T10:17:10.222851440Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19178205" Sep 12 10:17:10.224912 containerd[1513]: time="2025-09-12T10:17:10.224254451Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:10.228029 containerd[1513]: time="2025-09-12T10:17:10.227981301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:10.229660 containerd[1513]: time="2025-09-12T10:17:10.229597944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.491583701s" Sep 12 10:17:10.229820 containerd[1513]: time="2025-09-12T10:17:10.229793858Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 10:17:10.230813 containerd[1513]: time="2025-09-12T10:17:10.230621046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 10:17:11.511911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331931751.mount: Deactivated successfully. Sep 12 10:17:12.025816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 10:17:12.035667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:12.412725 containerd[1513]: time="2025-09-12T10:17:12.412612598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:12.415734 containerd[1513]: time="2025-09-12T10:17:12.415607989Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30926101" Sep 12 10:17:12.416807 containerd[1513]: time="2025-09-12T10:17:12.416756680Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:12.419090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 10:17:12.423533 containerd[1513]: time="2025-09-12T10:17:12.423458340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:12.425944 containerd[1513]: time="2025-09-12T10:17:12.425893750Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.194886888s" Sep 12 10:17:12.426050 containerd[1513]: time="2025-09-12T10:17:12.425952273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 10:17:12.426886 containerd[1513]: time="2025-09-12T10:17:12.426842143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 10:17:12.430345 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:17:12.494797 kubelet[2068]: E0912 10:17:12.494720 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:17:12.498295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:17:12.498584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:17:12.499336 systemd[1]: kubelet.service: Consumed 272ms CPU time, 110.4M memory peak. Sep 12 10:17:12.888157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984964846.mount: Deactivated successfully. 
Sep 12 10:17:14.238574 containerd[1513]: time="2025-09-12T10:17:14.238491879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.240397 containerd[1513]: time="2025-09-12T10:17:14.240316365Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 12 10:17:14.242262 containerd[1513]: time="2025-09-12T10:17:14.241680293Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.253562 containerd[1513]: time="2025-09-12T10:17:14.253481134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.255004 containerd[1513]: time="2025-09-12T10:17:14.254953350Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.82806782s" Sep 12 10:17:14.255204 containerd[1513]: time="2025-09-12T10:17:14.255175953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 10:17:14.256860 containerd[1513]: time="2025-09-12T10:17:14.256812138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 10:17:14.706798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858858519.mount: Deactivated successfully. 
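Each pull record above names the same image three ways: an image id (a sha256 config digest), a repo tag, and a repo digest. A small sketch that splits such a reference into repository, tag, and digest; the function name and parsing rules are mine, the example strings are copied from the coredns record above.

```python
def split_reference(ref: str):
    """Split an image reference into (repository, tag, digest).

    Handles the two forms seen in the pull records above: repo tags like
    'registry.k8s.io/coredns/coredns:v1.11.3' and repo digests like
    'registry.k8s.io/coredns/coredns@sha256:<64 hex chars>'.
    """
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    tag = None
    # A ':' after the last '/' is a tag separator; an earlier ':' (e.g. a registry port) is not.
    name, _, maybe_tag = ref.rpartition(":")
    if name and "/" not in maybe_tag:
        ref, tag = name, maybe_tag
    return ref, tag, digest

# Strings copied from the coredns pull record above.
print(split_reference("registry.k8s.io/coredns/coredns:v1.11.3"))
print(split_reference("registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"))
```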
Sep 12 10:17:14.719396 containerd[1513]: time="2025-09-12T10:17:14.717622097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.719396 containerd[1513]: time="2025-09-12T10:17:14.718333138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 12 10:17:14.719919 containerd[1513]: time="2025-09-12T10:17:14.719885280Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.725859 containerd[1513]: time="2025-09-12T10:17:14.723914956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:14.726153 containerd[1513]: time="2025-09-12T10:17:14.725456164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.597017ms" Sep 12 10:17:14.726266 containerd[1513]: time="2025-09-12T10:17:14.726166677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 10:17:14.728552 containerd[1513]: time="2025-09-12T10:17:14.728525012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 10:17:15.248036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037697432.mount: Deactivated successfully. Sep 12 10:17:17.741108 containerd[1513]: time="2025-09-12T10:17:17.741015590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:17.743086 containerd[1513]: time="2025-09-12T10:17:17.742991307Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565" Sep 12 10:17:17.745616 containerd[1513]: time="2025-09-12T10:17:17.743931139Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:17.749786 containerd[1513]: time="2025-09-12T10:17:17.749703217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:17.752390 containerd[1513]: time="2025-09-12T10:17:17.751636415Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.022747021s" Sep 12 10:17:17.752390 containerd[1513]: time="2025-09-12T10:17:17.751721462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 10:17:18.244430 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
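The tmpmount units above (for example var-lib-containerd-tmpmounts-containerd\x2dmount3037697432.mount) show systemd's path escaping: "/" separators become "-", while a literal "-" inside a path component is encoded as \x2d. The sketch below is a rough approximation of what systemd-escape --path --suffix=mount produces, not the canonical implementation; all names are mine and edge cases (leading dots, non-ASCII) are only loosely handled.

```python
def escape_component(component: str) -> str:
    """Escape one path component roughly the way systemd unit names do:
    keep ASCII alphanumerics, '_' and non-leading '.', encode the rest as \\xNN."""
    out = []
    for i, ch in enumerate(component):
        if ch.isascii() and (ch.isalnum() or ch == "_" or (ch == "." and i > 0)):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

def path_to_mount_unit(path: str) -> str:
    """Approximate the mount unit name systemd derives from a mount point path."""
    components = [c for c in path.split("/") if c]
    return "-".join(escape_component(c) for c in components) + ".mount"

# Reproduces the unit name logged above for one containerd tmpmount.
print(path_to_mount_unit("/var/lib/containerd/tmpmounts/containerd-mount3037697432"))
# var-lib-containerd-tmpmounts-containerd\x2dmount3037697432.mount
```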
Sep 12 10:17:20.766737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:20.767074 systemd[1]: kubelet.service: Consumed 272ms CPU time, 110.4M memory peak. Sep 12 10:17:20.778285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:20.824889 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-7.scope)... Sep 12 10:17:20.824916 systemd[1]: Reloading... Sep 12 10:17:20.984677 zram_generator::config[2260]: No configuration found. Sep 12 10:17:21.183441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:17:21.345782 systemd[1]: Reloading finished in 520 ms. Sep 12 10:17:21.488436 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 10:17:21.488599 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 10:17:21.489058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:21.499805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:22.222788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:22.236338 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:17:22.302456 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:17:22.302991 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:17:22.302991 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
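Both daemon reloads in this log repeat the same warning: docker.socket still points ListenStream= at the legacy /var/run/docker.sock and systemd rewrites it to /run/docker.sock at load time, asking for the unit file to be updated. A minimal sketch of that text fix-up on a unit fragment; it is illustrative only, the fragment is shaped after the warning rather than copied from the image, and on an immutable /usr the real fix would normally be a drop-in or an updated package.

```python
import re

def modernize_listen_stream(unit_text: str) -> str:
    """Rewrite ListenStream= values under the legacy /var/run/ prefix to /run/,
    which is what the systemd warning in the journal above asks for."""
    return re.sub(r"(?m)^(ListenStream=)/var/run/", r"\g<1>/run/", unit_text)

# Fragment shaped like the docker.socket line the warning refers to
# (the full unit file is not reproduced in this log).
fragment = "[Socket]\nListenStream=/var/run/docker.sock\n"
print(modernize_listen_stream(fragment), end="")
# [Socket]
# ListenStream=/run/docker.sock
```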
Sep 12 10:17:22.303102 kubelet[2308]: I0912 10:17:22.302963 2308 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:17:22.817525 kubelet[2308]: I0912 10:17:22.817451 2308 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 10:17:22.817525 kubelet[2308]: I0912 10:17:22.817498 2308 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:17:22.818059 kubelet[2308]: I0912 10:17:22.818020 2308 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 10:17:22.867586 kubelet[2308]: E0912 10:17:22.867510 2308 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:22.872595 kubelet[2308]: I0912 10:17:22.872177 2308 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:17:22.882568 kubelet[2308]: E0912 10:17:22.882519 2308 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:17:22.883687 kubelet[2308]: I0912 10:17:22.882808 2308 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:17:22.888384 kubelet[2308]: I0912 10:17:22.888329 2308 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:17:22.891187 kubelet[2308]: I0912 10:17:22.891063 2308 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:17:22.891670 kubelet[2308]: I0912 10:17:22.891172 2308 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:17:22.891670 kubelet[2308]: I0912 10:17:22.891669 2308 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:17:22.891978 kubelet[2308]: I0912 10:17:22.891694 2308 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 10:17:22.891978 kubelet[2308]: I0912 10:17:22.891909 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:22.900678 kubelet[2308]: I0912 10:17:22.900607 2308 kubelet.go:446] "Attempting to sync node with API server" Sep 12 10:17:22.900889 kubelet[2308]: I0912 10:17:22.900740 2308 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:17:22.900889 kubelet[2308]: I0912 10:17:22.900793 2308 kubelet.go:352] "Adding apiserver pod source" Sep 12 10:17:22.900889 kubelet[2308]: I0912 10:17:22.900816 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:17:22.911124 kubelet[2308]: W0912 10:17:22.909881 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:22.911124 kubelet[2308]: E0912 10:17:22.909977 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 
10:17:22.911124 kubelet[2308]: W0912 10:17:22.910425 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250911-2100-377226d477597500f469&limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:22.911124 kubelet[2308]: E0912 10:17:22.910503 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250911-2100-377226d477597500f469&limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:22.911741 kubelet[2308]: I0912 10:17:22.911706 2308 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:17:22.912291 kubelet[2308]: I0912 10:17:22.912254 2308 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:17:22.913760 kubelet[2308]: W0912 10:17:22.913704 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 10:17:22.916779 kubelet[2308]: I0912 10:17:22.916741 2308 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:17:22.916898 kubelet[2308]: I0912 10:17:22.916811 2308 server.go:1287] "Started kubelet" Sep 12 10:17:22.917085 kubelet[2308]: I0912 10:17:22.917009 2308 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:17:22.918575 kubelet[2308]: I0912 10:17:22.918407 2308 server.go:479] "Adding debug handlers to kubelet server" Sep 12 10:17:22.926961 kubelet[2308]: I0912 10:17:22.926038 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:17:22.926961 kubelet[2308]: I0912 10:17:22.926522 2308 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:17:22.932894 kubelet[2308]: I0912 10:17:22.932107 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:17:22.932894 kubelet[2308]: E0912 10:17:22.930263 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-2-nightly-20250911-2100-377226d477597500f469.18648197a69e39d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,UID:ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,},FirstTimestamp:2025-09-12 10:17:22.916768212 +0000 UTC m=+0.673828062,LastTimestamp:2025-09-12 10:17:22.916768212 +0000 UTC m=+0.673828062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,}" Sep 12 10:17:22.938015 kubelet[2308]: I0912 10:17:22.937963 2308 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:17:22.940837 kubelet[2308]: E0912 10:17:22.938827 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" Sep 12 10:17:22.940837 kubelet[2308]: I0912 10:17:22.938904 2308 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:17:22.940837 kubelet[2308]: I0912 10:17:22.939194 2308 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:17:22.940837 kubelet[2308]: I0912 10:17:22.939311 2308 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:17:22.942331 kubelet[2308]: W0912 10:17:22.942242 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:22.942609 kubelet[2308]: E0912 10:17:22.942554 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:22.943026 kubelet[2308]: E0912 10:17:22.942965 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250911-2100-377226d477597500f469?timeout=10s\": dial tcp 10.128.0.19:6443: connect: connection refused" interval="200ms" Sep 12 10:17:22.944400 kubelet[2308]: I0912 10:17:22.944340 2308 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:17:22.944720 kubelet[2308]: I0912 10:17:22.944607 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:17:22.946578 kubelet[2308]: I0912 10:17:22.946554 2308 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:17:22.967715 kubelet[2308]: E0912 10:17:22.967673 2308 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:17:22.978985 kubelet[2308]: I0912 10:17:22.978738 2308 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:17:22.978985 kubelet[2308]: I0912 10:17:22.978768 2308 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:17:22.978985 kubelet[2308]: I0912 10:17:22.978796 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:22.982068 kubelet[2308]: I0912 10:17:22.981721 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:17:22.985834 kubelet[2308]: I0912 10:17:22.985802 2308 policy_none.go:49] "None policy: Start" Sep 12 10:17:22.986073 kubelet[2308]: I0912 10:17:22.985841 2308 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:17:22.986073 kubelet[2308]: I0912 10:17:22.985877 2308 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:17:22.988209 kubelet[2308]: I0912 10:17:22.987920 2308 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:17:22.988209 kubelet[2308]: I0912 10:17:22.987962 2308 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 10:17:22.988209 kubelet[2308]: I0912 10:17:22.988006 2308 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 10:17:22.988209 kubelet[2308]: I0912 10:17:22.988020 2308 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 10:17:22.988209 kubelet[2308]: E0912 10:17:22.988105 2308 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:17:22.990054 kubelet[2308]: W0912 10:17:22.989891 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:22.990054 kubelet[2308]: E0912 10:17:22.989981 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:23.000475 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 10:17:23.018868 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 10:17:23.026780 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 10:17:23.039134 kubelet[2308]: E0912 10:17:23.039071 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" Sep 12 10:17:23.039385 kubelet[2308]: I0912 10:17:23.039352 2308 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:17:23.040162 kubelet[2308]: I0912 10:17:23.039750 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:17:23.040162 kubelet[2308]: I0912 10:17:23.039775 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:17:23.040360 kubelet[2308]: I0912 10:17:23.040225 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:17:23.042768 kubelet[2308]: E0912 10:17:23.042737 2308 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 10:17:23.042896 kubelet[2308]: E0912 10:17:23.042802 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" Sep 12 10:17:23.116803 systemd[1]: Created slice kubepods-burstable-pod6f25e16f2e47f58ca0ab499749f30930.slice - libcontainer container kubepods-burstable-pod6f25e16f2e47f58ca0ab499749f30930.slice. 
Sep 12 10:17:23.133509 kubelet[2308]: E0912 10:17:23.133111 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.137390 systemd[1]: Created slice kubepods-burstable-podc9d3431dacd07fb134cffeb1a716f585.slice - libcontainer container kubepods-burstable-podc9d3431dacd07fb134cffeb1a716f585.slice. Sep 12 10:17:23.141349 kubelet[2308]: E0912 10:17:23.141298 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.146146 kubelet[2308]: E0912 10:17:23.145553 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250911-2100-377226d477597500f469?timeout=10s\": dial tcp 10.128.0.19:6443: connect: connection refused" interval="400ms" Sep 12 10:17:23.146146 kubelet[2308]: I0912 10:17:23.145589 2308 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.146146 kubelet[2308]: E0912 10:17:23.146098 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.19:6443/api/v1/nodes\": dial tcp 10.128.0.19:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.146809 systemd[1]: Created slice kubepods-burstable-pod85c469f7ed0efb64ffa73f5d65c9f4ea.slice - libcontainer container kubepods-burstable-pod85c469f7ed0efb64ffa73f5d65c9f4ea.slice. 
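The "Failed to ensure lease exists, will retry" errors report a growing retry interval while the API server endpoint refuses connections: 200ms at the first failure, 400ms here, then 800ms and 1.6s later in this log, i.e. a doubling schedule. A minimal sketch of that doubling; whether the controller caps or jitters the interval beyond 1.6s is not established by this log, so no cap is modeled.

```python
from itertools import islice

def lease_retry_intervals(initial_ms: float = 200.0):
    """Yield the doubling retry intervals observed in the journal above
    ('Failed to ensure lease exists, will retry' with interval=...)."""
    interval = initial_ms
    while True:
        yield interval
        interval *= 2

# The first four values match the intervals quoted in this log.
print([f"{ms/1000:g}s" if ms >= 1000 else f"{ms:g}ms"
       for ms in islice(lease_retry_intervals(), 4)])
# ['200ms', '400ms', '800ms', '1.6s']
```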
Sep 12 10:17:23.150047 kubelet[2308]: E0912 10:17:23.150011 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240398 kubelet[2308]: I0912 10:17:23.240304 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240693 kubelet[2308]: I0912 10:17:23.240445 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240693 kubelet[2308]: I0912 10:17:23.240488 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240693 kubelet[2308]: I0912 10:17:23.240518 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240693 kubelet[2308]: I0912 10:17:23.240548 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240910 kubelet[2308]: I0912 10:17:23.240579 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85c469f7ed0efb64ffa73f5d65c9f4ea-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"85c469f7ed0efb64ffa73f5d65c9f4ea\") " pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240910 kubelet[2308]: I0912 10:17:23.240609 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-ca-certs\") pod 
\"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240910 kubelet[2308]: I0912 10:17:23.240667 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.240910 kubelet[2308]: I0912 10:17:23.240701 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.351951 kubelet[2308]: I0912 10:17:23.351902 2308 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.352846 kubelet[2308]: E0912 10:17:23.352464 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.19:6443/api/v1/nodes\": dial tcp 10.128.0.19:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.436113 containerd[1513]: time="2025-09-12T10:17:23.435902812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:6f25e16f2e47f58ca0ab499749f30930,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:23.442720 containerd[1513]: time="2025-09-12T10:17:23.442632872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:c9d3431dacd07fb134cffeb1a716f585,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:23.454021 containerd[1513]: time="2025-09-12T10:17:23.453959118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:85c469f7ed0efb64ffa73f5d65c9f4ea,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:23.546779 kubelet[2308]: E0912 10:17:23.546682 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250911-2100-377226d477597500f469?timeout=10s\": dial tcp 10.128.0.19:6443: connect: connection refused" interval="800ms" Sep 12 10:17:23.760514 kubelet[2308]: I0912 10:17:23.760327 2308 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.761194 kubelet[2308]: E0912 10:17:23.761141 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.19:6443/api/v1/nodes\": dial tcp 10.128.0.19:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:23.795061 kubelet[2308]: W0912 10:17:23.794960 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://10.128.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250911-2100-377226d477597500f469&limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:23.795061 kubelet[2308]: E0912 10:17:23.795069 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-2-nightly-20250911-2100-377226d477597500f469&limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:23.857174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119700358.mount: Deactivated successfully. Sep 12 10:17:23.865972 containerd[1513]: time="2025-09-12T10:17:23.865879635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:17:23.868856 containerd[1513]: time="2025-09-12T10:17:23.868771364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:17:23.871218 containerd[1513]: time="2025-09-12T10:17:23.871139679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Sep 12 10:17:23.872172 containerd[1513]: time="2025-09-12T10:17:23.872111959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:17:23.874748 containerd[1513]: time="2025-09-12T10:17:23.874690817Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:17:23.876561 containerd[1513]: time="2025-09-12T10:17:23.876401387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:17:23.876561 containerd[1513]: time="2025-09-12T10:17:23.876444709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:17:23.879638 containerd[1513]: time="2025-09-12T10:17:23.879589002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:17:23.882243 containerd[1513]: time="2025-09-12T10:17:23.882199266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 428.103181ms" Sep 12 10:17:23.885151 containerd[1513]: time="2025-09-12T10:17:23.885090147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.310904ms" Sep 12 10:17:23.888661 containerd[1513]: time="2025-09-12T10:17:23.888564682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.499328ms" Sep 12 10:17:23.951409 kubelet[2308]: W0912 10:17:23.951254 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:23.951409 kubelet[2308]: E0912 10:17:23.951342 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:24.128888 containerd[1513]: time="2025-09-12T10:17:24.125989988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:24.128888 containerd[1513]: time="2025-09-12T10:17:24.128776872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:24.128888 containerd[1513]: time="2025-09-12T10:17:24.128800294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.129310 containerd[1513]: time="2025-09-12T10:17:24.128948234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.129491 containerd[1513]: time="2025-09-12T10:17:24.128138761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:24.129491 containerd[1513]: time="2025-09-12T10:17:24.128244067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:24.129491 containerd[1513]: time="2025-09-12T10:17:24.128277347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.129491 containerd[1513]: time="2025-09-12T10:17:24.128583237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.134159 containerd[1513]: time="2025-09-12T10:17:24.133616342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:24.134159 containerd[1513]: time="2025-09-12T10:17:24.133745451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:24.134159 containerd[1513]: time="2025-09-12T10:17:24.133774428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.134159 containerd[1513]: time="2025-09-12T10:17:24.133910836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:24.183497 systemd[1]: Started cri-containerd-8cced1bda3b618a1d9b63a9647c9c5634a4e5402fc633a6f1c96867969961c39.scope - libcontainer container 8cced1bda3b618a1d9b63a9647c9c5634a4e5402fc633a6f1c96867969961c39. Sep 12 10:17:24.200915 systemd[1]: Started cri-containerd-428e50422d0ea5c346bca924a43082d60a51ee597c074471082a3160133d9bf7.scope - libcontainer container 428e50422d0ea5c346bca924a43082d60a51ee597c074471082a3160133d9bf7. Sep 12 10:17:24.202778 systemd[1]: Started cri-containerd-97c1f38b6548d025e4ecdce61c26739357ccdf31a789ed520ed4b6f8d6179d25.scope - libcontainer container 97c1f38b6548d025e4ecdce61c26739357ccdf31a789ed520ed4b6f8d6179d25. Sep 12 10:17:24.313708 containerd[1513]: time="2025-09-12T10:17:24.312787491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:6f25e16f2e47f58ca0ab499749f30930,Namespace:kube-system,Attempt:0,} returns sandbox id \"428e50422d0ea5c346bca924a43082d60a51ee597c074471082a3160133d9bf7\"" Sep 12 10:17:24.317073 containerd[1513]: time="2025-09-12T10:17:24.316468066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:c9d3431dacd07fb134cffeb1a716f585,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c1f38b6548d025e4ecdce61c26739357ccdf31a789ed520ed4b6f8d6179d25\"" Sep 12 10:17:24.321151 kubelet[2308]: E0912 10:17:24.320330 2308 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d4775975" Sep 12 10:17:24.330305 containerd[1513]: time="2025-09-12T10:17:24.330252054Z" level=info msg="CreateContainer within sandbox \"428e50422d0ea5c346bca924a43082d60a51ee597c074471082a3160133d9bf7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:17:24.335685 kubelet[2308]: E0912 10:17:24.334365 2308 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-37722" Sep 12 10:17:24.344663 containerd[1513]: time="2025-09-12T10:17:24.344588768Z" level=info msg="CreateContainer within sandbox \"97c1f38b6548d025e4ecdce61c26739357ccdf31a789ed520ed4b6f8d6179d25\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:17:24.348492 kubelet[2308]: E0912 10:17:24.348327 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-2-nightly-20250911-2100-377226d477597500f469?timeout=10s\": dial tcp 10.128.0.19:6443: connect: connection refused" interval="1.6s" Sep 12 10:17:24.350985 containerd[1513]: time="2025-09-12T10:17:24.350893329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469,Uid:85c469f7ed0efb64ffa73f5d65c9f4ea,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8cced1bda3b618a1d9b63a9647c9c5634a4e5402fc633a6f1c96867969961c39\"" Sep 12 10:17:24.355762 kubelet[2308]: E0912 10:17:24.355382 2308 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d4775975" Sep 12 10:17:24.359944 containerd[1513]: time="2025-09-12T10:17:24.358796233Z" level=info msg="CreateContainer within sandbox \"8cced1bda3b618a1d9b63a9647c9c5634a4e5402fc633a6f1c96867969961c39\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:17:24.365790 kubelet[2308]: W0912 10:17:24.365569 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:24.365790 kubelet[2308]: E0912 10:17:24.365690 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:24.366566 containerd[1513]: time="2025-09-12T10:17:24.366376751Z" level=info msg="CreateContainer within sandbox \"428e50422d0ea5c346bca924a43082d60a51ee597c074471082a3160133d9bf7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f99be66da74f17fa20d5202ba99b1270b19be3c73372a3f0e9d3410ca875e51c\"" Sep 12 10:17:24.367463 containerd[1513]: time="2025-09-12T10:17:24.367425375Z" level=info msg="StartContainer for \"f99be66da74f17fa20d5202ba99b1270b19be3c73372a3f0e9d3410ca875e51c\"" Sep 12 10:17:24.382395 containerd[1513]: time="2025-09-12T10:17:24.382135329Z" level=info msg="CreateContainer within sandbox \"97c1f38b6548d025e4ecdce61c26739357ccdf31a789ed520ed4b6f8d6179d25\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b52a4680764fa6b4fe7f113455e399fe231722cd96539ca6569b3db5970d52dd\"" Sep 12 10:17:24.384717 containerd[1513]: time="2025-09-12T10:17:24.384604966Z" level=info msg="StartContainer for \"b52a4680764fa6b4fe7f113455e399fe231722cd96539ca6569b3db5970d52dd\"" Sep 12 10:17:24.389427 containerd[1513]: time="2025-09-12T10:17:24.389371864Z" level=info msg="CreateContainer within sandbox \"8cced1bda3b618a1d9b63a9647c9c5634a4e5402fc633a6f1c96867969961c39\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3e4095e65e8d217b7345b79eee36a05ac02a708e74d268a4d19c22ab7e1bbc2b\"" Sep 12 10:17:24.392673 containerd[1513]: time="2025-09-12T10:17:24.391533803Z" level=info msg="StartContainer for \"3e4095e65e8d217b7345b79eee36a05ac02a708e74d268a4d19c22ab7e1bbc2b\"" Sep 12 10:17:24.434116 systemd[1]: Started cri-containerd-f99be66da74f17fa20d5202ba99b1270b19be3c73372a3f0e9d3410ca875e51c.scope - libcontainer container f99be66da74f17fa20d5202ba99b1270b19be3c73372a3f0e9d3410ca875e51c. 
Sep 12 10:17:24.440424 kubelet[2308]: W0912 10:17:24.439991 2308 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.19:6443: connect: connection refused Sep 12 10:17:24.441319 kubelet[2308]: E0912 10:17:24.441237 2308 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.19:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:24.485027 systemd[1]: Started cri-containerd-3e4095e65e8d217b7345b79eee36a05ac02a708e74d268a4d19c22ab7e1bbc2b.scope - libcontainer container 3e4095e65e8d217b7345b79eee36a05ac02a708e74d268a4d19c22ab7e1bbc2b. Sep 12 10:17:24.502965 systemd[1]: Started cri-containerd-b52a4680764fa6b4fe7f113455e399fe231722cd96539ca6569b3db5970d52dd.scope - libcontainer container b52a4680764fa6b4fe7f113455e399fe231722cd96539ca6569b3db5970d52dd. Sep 12 10:17:24.570318 containerd[1513]: time="2025-09-12T10:17:24.569906771Z" level=info msg="StartContainer for \"f99be66da74f17fa20d5202ba99b1270b19be3c73372a3f0e9d3410ca875e51c\" returns successfully" Sep 12 10:17:24.582217 kubelet[2308]: I0912 10:17:24.582161 2308 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:24.583613 kubelet[2308]: E0912 10:17:24.583563 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.19:6443/api/v1/nodes\": dial tcp 10.128.0.19:6443: connect: connection refused" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:24.630537 containerd[1513]: time="2025-09-12T10:17:24.629637804Z" level=info msg="StartContainer for \"b52a4680764fa6b4fe7f113455e399fe231722cd96539ca6569b3db5970d52dd\" returns successfully" Sep 12 10:17:24.665260 containerd[1513]: time="2025-09-12T10:17:24.664477598Z" level=info msg="StartContainer for \"3e4095e65e8d217b7345b79eee36a05ac02a708e74d268a4d19c22ab7e1bbc2b\" returns successfully" Sep 12 10:17:25.010378 kubelet[2308]: E0912 10:17:25.009767 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:25.010378 kubelet[2308]: E0912 10:17:25.009943 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:25.010378 kubelet[2308]: E0912 10:17:25.009760 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:26.015684 kubelet[2308]: E0912 10:17:26.015620 2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:26.017363 kubelet[2308]: E0912 10:17:26.017207 
2308 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:26.200424 kubelet[2308]: I0912 10:17:26.200374 2308 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.425413 kubelet[2308]: E0912 10:17:28.425350 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.517955 kubelet[2308]: I0912 10:17:28.517884 2308 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.518233 kubelet[2308]: E0912 10:17:28.517968 2308 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\": node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" Sep 12 10:17:28.542371 kubelet[2308]: I0912 10:17:28.541900 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.602664 kubelet[2308]: E0912 10:17:28.602583 2308 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.602985 kubelet[2308]: I0912 10:17:28.602733 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.608326 kubelet[2308]: E0912 10:17:28.607985 2308 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.608326 kubelet[2308]: I0912 10:17:28.608036 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.613867 kubelet[2308]: E0912 10:17:28.613790 2308 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.626681 kubelet[2308]: I0912 10:17:28.624339 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.626949 kubelet[2308]: E0912 10:17:28.626860 2308 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:28.907925 kubelet[2308]: I0912 10:17:28.907854 
2308 apiserver.go:52] "Watching apiserver" Sep 12 10:17:28.940354 kubelet[2308]: I0912 10:17:28.940277 2308 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:17:30.722589 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... Sep 12 10:17:30.722622 systemd[1]: Reloading... Sep 12 10:17:30.963669 zram_generator::config[2627]: No configuration found. Sep 12 10:17:31.109935 kubelet[2308]: I0912 10:17:31.109889 2308 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:31.120455 kubelet[2308]: W0912 10:17:31.120034 2308 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 12 10:17:31.177039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:17:31.400128 systemd[1]: Reloading finished in 676 ms. Sep 12 10:17:31.443500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:31.454300 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:17:31.454735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:31.454841 systemd[1]: kubelet.service: Consumed 1.295s CPU time, 134.3M memory peak. Sep 12 10:17:31.466345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:31.835001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:31.848704 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:17:31.935758 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:17:31.935758 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:17:31.935758 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:17:31.935758 kubelet[2674]: I0912 10:17:31.935002 2674 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:17:31.948616 kubelet[2674]: I0912 10:17:31.948518 2674 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 10:17:31.948616 kubelet[2674]: I0912 10:17:31.948591 2674 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:17:31.949602 kubelet[2674]: I0912 10:17:31.949331 2674 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 10:17:31.951935 kubelet[2674]: I0912 10:17:31.951874 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
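The kubelet entries above use klog's header layout: a severity letter (I/W/E), MMDD, wall-clock time with microseconds, the emitting PID (2308 for the first kubelet instance, 2674 after the reload), source file:line, then the message. A small stdlib sketch that pulls those fields out of a raw journal line; the field names are mine, the layout is as observed above.

```python
import re

# klog header as it appears in the kubelet lines above, e.g.
# I0912 10:17:31.951874 2674 certificate_store.go:130] Loading cert/key pair ...
KLOG_RE = re.compile(
    r"(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+(?P<source>[\w.]+:\d+)\]\s*(?P<message>.*)"
)

def parse_klog(line: str):
    """Return the klog header fields of a kubelet log line, or None if absent."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

sample = ('I0912 10:17:31.951874 2674 certificate_store.go:130] '
          'Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".')
print(parse_klog(sample))
```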
Sep 12 10:17:31.965778 kubelet[2674]: I0912 10:17:31.965600 2674 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:17:31.976582 kubelet[2674]: E0912 10:17:31.974531 2674 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:17:31.976582 kubelet[2674]: I0912 10:17:31.974579 2674 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:17:31.981355 kubelet[2674]: I0912 10:17:31.981320 2674 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 10:17:31.982078 kubelet[2674]: I0912 10:17:31.982022 2674 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:17:31.982844 kubelet[2674]: I0912 10:17:31.982196 2674 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:17:31.982844 kubelet[2674]: I0912 10:17:31.982776 2674 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:17:31.982844 kubelet[2674]: I0912 10:17:31.982804 2674 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 10:17:31.983145 kubelet[2674]: I0912 10:17:31.982890 2674 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:31.983145 kubelet[2674]: I0912 10:17:31.983132 2674 kubelet.go:446] "Attempting to sync node with API server" Sep 12 10:17:31.983243 kubelet[2674]: I0912 10:17:31.983172 2674 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:17:31.983243 kubelet[2674]: I0912 10:17:31.983207 2674 kubelet.go:352] "Adding apiserver pod source" Sep 12 10:17:31.983243 kubelet[2674]: I0912 10:17:31.983227 2674 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:17:31.987448 kubelet[2674]: I0912 10:17:31.987160 2674 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:17:31.989882 kubelet[2674]: I0912 10:17:31.988426 2674 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:17:31.992911 kubelet[2674]: I0912 10:17:31.992280 2674 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:17:31.993476 kubelet[2674]: I0912 10:17:31.993369 2674 server.go:1287] "Started kubelet" Sep 12 10:17:32.008749 kubelet[2674]: I0912 10:17:32.008109 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:17:32.016674 kubelet[2674]: I0912 10:17:32.014954 2674 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:17:32.026258 kubelet[2674]: I0912 10:17:32.026212 2674 server.go:479] "Adding debug handlers to kubelet server" Sep 12 10:17:32.038110 sudo[2688]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:17:32.038920 kubelet[2674]: I0912 10:17:32.038822 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:17:32.039941 kubelet[2674]: I0912 10:17:32.039214 2674 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:17:32.039941 kubelet[2674]: I0912 10:17:32.039901 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:17:32.039490 sudo[2688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 10:17:32.045773 kubelet[2674]: I0912 10:17:32.045433 2674 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:17:32.045773 kubelet[2674]: E0912 10:17:32.045685 2674 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" not found" Sep 12 10:17:32.046508 kubelet[2674]: I0912 10:17:32.046476 2674 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:17:32.050867 kubelet[2674]: I0912 10:17:32.050836 2674 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:17:32.077240 kubelet[2674]: I0912 10:17:32.076112 2674 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:17:32.077240 kubelet[2674]: I0912 10:17:32.076307 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:17:32.089168 kubelet[2674]: I0912 10:17:32.087880 2674 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:17:32.103932 kubelet[2674]: E0912 10:17:32.103584 2674 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:17:32.107267 kubelet[2674]: I0912 10:17:32.106384 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:17:32.109690 kubelet[2674]: I0912 10:17:32.109159 2674 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:17:32.109690 kubelet[2674]: I0912 10:17:32.109231 2674 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 10:17:32.109690 kubelet[2674]: I0912 10:17:32.109268 2674 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 10:17:32.109690 kubelet[2674]: I0912 10:17:32.109280 2674 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 10:17:32.109690 kubelet[2674]: E0912 10:17:32.109418 2674 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:17:32.209977 kubelet[2674]: E0912 10:17:32.209901 2674 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.216770 2674 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.216836 2674 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.216899 2674 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217162 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217179 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217211 2674 policy_none.go:49] "None policy: Start" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217231 2674 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217250 2674 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:17:32.217471 kubelet[2674]: I0912 10:17:32.217482 2674 state_mem.go:75] "Updated machine memory state" Sep 12 10:17:32.246364 kubelet[2674]: I0912 10:17:32.246104 2674 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:17:32.248398 kubelet[2674]: I0912 10:17:32.247851 2674 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:17:32.248398 kubelet[2674]: I0912 10:17:32.247877 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:17:32.250678 kubelet[2674]: I0912 10:17:32.250295 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:17:32.255552 kubelet[2674]: E0912 10:17:32.255499 2674 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 10:17:32.382433 kubelet[2674]: I0912 10:17:32.381000 2674 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.395364 kubelet[2674]: I0912 10:17:32.394578 2674 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.395364 kubelet[2674]: I0912 10:17:32.394715 2674 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.414963 kubelet[2674]: I0912 10:17:32.411236 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.414963 kubelet[2674]: I0912 10:17:32.411873 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.414963 kubelet[2674]: I0912 10:17:32.412167 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.425507 kubelet[2674]: W0912 10:17:32.425463 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 12 10:17:32.426778 kubelet[2674]: E0912 10:17:32.425565 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.428728 kubelet[2674]: W0912 10:17:32.428699 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 12 10:17:32.431836 kubelet[2674]: W0912 10:17:32.430759 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 12 10:17:32.455680 kubelet[2674]: I0912 10:17:32.455066 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.455680 kubelet[2674]: I0912 10:17:32.455146 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.455680 kubelet[2674]: I0912 10:17:32.455195 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-kubeconfig\") pod 
\"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.455680 kubelet[2674]: I0912 10:17:32.455234 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-ca-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.456065 kubelet[2674]: I0912 10:17:32.455272 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-k8s-certs\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.456065 kubelet[2674]: I0912 10:17:32.455309 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f25e16f2e47f58ca0ab499749f30930-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"6f25e16f2e47f58ca0ab499749f30930\") " pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.456065 kubelet[2674]: I0912 10:17:32.455340 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-ca-certs\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.456065 kubelet[2674]: I0912 10:17:32.455375 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9d3431dacd07fb134cffeb1a716f585-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"c9d3431dacd07fb134cffeb1a716f585\") " pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.456196 kubelet[2674]: I0912 10:17:32.455415 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85c469f7ed0efb64ffa73f5d65c9f4ea-kubeconfig\") pod \"kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" (UID: \"85c469f7ed0efb64ffa73f5d65c9f4ea\") " pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:32.492802 update_engine[1495]: I20250912 10:17:32.492329 1495 update_attempter.cc:509] Updating boot flags... 
Sep 12 10:17:32.630735 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2720) Sep 12 10:17:32.940891 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2724) Sep 12 10:17:32.987722 kubelet[2674]: I0912 10:17:32.985866 2674 apiserver.go:52] "Watching apiserver" Sep 12 10:17:33.046721 kubelet[2674]: I0912 10:17:33.046674 2674 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:17:33.166956 kubelet[2674]: I0912 10:17:33.166908 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:33.182857 kubelet[2674]: W0912 10:17:33.182818 2674 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 12 10:17:33.183597 kubelet[2674]: E0912 10:17:33.183452 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" Sep 12 10:17:33.212697 kubelet[2674]: I0912 10:17:33.211428 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" podStartSLOduration=2.211406021 podStartE2EDuration="2.211406021s" podCreationTimestamp="2025-09-12 10:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:33.21103053 +0000 UTC m=+1.350991459" watchObservedRunningTime="2025-09-12 10:17:33.211406021 +0000 UTC m=+1.351366951" Sep 12 10:17:33.266777 kubelet[2674]: I0912 10:17:33.266683 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" podStartSLOduration=1.266635941 podStartE2EDuration="1.266635941s" podCreationTimestamp="2025-09-12 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:33.236486582 +0000 UTC m=+1.376447511" watchObservedRunningTime="2025-09-12 10:17:33.266635941 +0000 UTC m=+1.406596873" Sep 12 10:17:33.267097 kubelet[2674]: I0912 10:17:33.266826 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" podStartSLOduration=1.2668161119999999 podStartE2EDuration="1.266816112s" podCreationTimestamp="2025-09-12 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:33.264835205 +0000 UTC m=+1.404796137" watchObservedRunningTime="2025-09-12 10:17:33.266816112 +0000 UTC m=+1.406777044" Sep 12 10:17:33.322377 sudo[2688]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:35.526071 kubelet[2674]: I0912 10:17:35.525828 2674 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:17:35.527706 containerd[1513]: time="2025-09-12T10:17:35.527028061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 10:17:35.530306 kubelet[2674]: I0912 10:17:35.527538 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:17:35.713775 sudo[1770]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:35.771678 sshd[1767]: Connection closed by 139.178.89.65 port 52506 Sep 12 10:17:35.772882 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Sep 12 10:17:35.780207 systemd[1]: sshd@7-10.128.0.19:22-139.178.89.65:52506.service: Deactivated successfully. Sep 12 10:17:35.784078 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 10:17:35.784398 systemd[1]: session-7.scope: Consumed 6.475s CPU time, 264M memory peak. Sep 12 10:17:35.786972 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Sep 12 10:17:35.788607 systemd-logind[1490]: Removed session 7. Sep 12 10:17:36.317230 systemd[1]: Created slice kubepods-besteffort-podf260d462_701f_4d92_bed0_fa3d546237f7.slice - libcontainer container kubepods-besteffort-podf260d462_701f_4d92_bed0_fa3d546237f7.slice. Sep 12 10:17:36.344424 systemd[1]: Created slice kubepods-burstable-pod93bc4f6b_ca4e_49df_ad29_3d7d2f89e884.slice - libcontainer container kubepods-burstable-pod93bc4f6b_ca4e_49df_ad29_3d7d2f89e884.slice. Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.389797 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-bpf-maps\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.389880 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f260d462-701f-4d92-bed0-fa3d546237f7-lib-modules\") pod \"kube-proxy-g4rqw\" (UID: \"f260d462-701f-4d92-bed0-fa3d546237f7\") " pod="kube-system/kube-proxy-g4rqw" Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.389937 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-clustermesh-secrets\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.390014 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hubble-tls\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.390048 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f260d462-701f-4d92-bed0-fa3d546237f7-xtables-lock\") pod \"kube-proxy-g4rqw\" (UID: \"f260d462-701f-4d92-bed0-fa3d546237f7\") " pod="kube-system/kube-proxy-g4rqw" Sep 12 10:17:36.390245 kubelet[2674]: I0912 10:17:36.390103 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-cgroup\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391681 
kubelet[2674]: I0912 10:17:36.390171 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-etc-cni-netd\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391681 kubelet[2674]: I0912 10:17:36.390223 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f260d462-701f-4d92-bed0-fa3d546237f7-kube-proxy\") pod \"kube-proxy-g4rqw\" (UID: \"f260d462-701f-4d92-bed0-fa3d546237f7\") " pod="kube-system/kube-proxy-g4rqw" Sep 12 10:17:36.391681 kubelet[2674]: I0912 10:17:36.390254 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-lib-modules\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391681 kubelet[2674]: I0912 10:17:36.390280 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-net\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391681 kubelet[2674]: I0912 10:17:36.390308 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hq7d\" (UniqueName: \"kubernetes.io/projected/f260d462-701f-4d92-bed0-fa3d546237f7-kube-api-access-8hq7d\") pod \"kube-proxy-g4rqw\" (UID: \"f260d462-701f-4d92-bed0-fa3d546237f7\") " pod="kube-system/kube-proxy-g4rqw" Sep 12 10:17:36.391681 kubelet[2674]: I0912 10:17:36.390341 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-xtables-lock\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390367 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlt9\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-kube-api-access-gdlt9\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390409 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hostproc\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390435 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-kernel\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390469 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-config-path\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390497 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cni-path\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.391989 kubelet[2674]: I0912 10:17:36.390525 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-run\") pod \"cilium-tf6mr\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " pod="kube-system/cilium-tf6mr" Sep 12 10:17:36.639165 systemd[1]: Created slice kubepods-besteffort-podd2af3b83_2edd_44b8_a1b2_4fca01315eff.slice - libcontainer container kubepods-besteffort-podd2af3b83_2edd_44b8_a1b2_4fca01315eff.slice. Sep 12 10:17:36.644702 containerd[1513]: time="2025-09-12T10:17:36.644286853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4rqw,Uid:f260d462-701f-4d92-bed0-fa3d546237f7,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:36.655404 containerd[1513]: time="2025-09-12T10:17:36.655334861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tf6mr,Uid:93bc4f6b-ca4e-49df-ad29-3d7d2f89e884,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:36.693478 kubelet[2674]: I0912 10:17:36.693206 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxmh\" (UniqueName: \"kubernetes.io/projected/d2af3b83-2edd-44b8-a1b2-4fca01315eff-kube-api-access-6rxmh\") pod \"cilium-operator-6c4d7847fc-kmgqp\" (UID: \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\") " pod="kube-system/cilium-operator-6c4d7847fc-kmgqp" Sep 12 10:17:36.693478 kubelet[2674]: I0912 10:17:36.693285 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2af3b83-2edd-44b8-a1b2-4fca01315eff-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kmgqp\" (UID: \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\") " pod="kube-system/cilium-operator-6c4d7847fc-kmgqp" Sep 12 10:17:36.728737 containerd[1513]: time="2025-09-12T10:17:36.727523337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:36.731076 containerd[1513]: time="2025-09-12T10:17:36.730737492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:36.731076 containerd[1513]: time="2025-09-12T10:17:36.730783707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:36.731076 containerd[1513]: time="2025-09-12T10:17:36.730950767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:36.734982 containerd[1513]: time="2025-09-12T10:17:36.734855462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:36.735142 containerd[1513]: time="2025-09-12T10:17:36.735046506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:36.735210 containerd[1513]: time="2025-09-12T10:17:36.735142277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:36.735609 containerd[1513]: time="2025-09-12T10:17:36.735521097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:36.773091 systemd[1]: Started cri-containerd-3ca69c2b1cec0153bc804cdbc6a39c39d5b83752f79c385ea0175208551e73cb.scope - libcontainer container 3ca69c2b1cec0153bc804cdbc6a39c39d5b83752f79c385ea0175208551e73cb. Sep 12 10:17:36.776401 systemd[1]: Started cri-containerd-cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532.scope - libcontainer container cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532. Sep 12 10:17:36.850258 containerd[1513]: time="2025-09-12T10:17:36.849806490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tf6mr,Uid:93bc4f6b-ca4e-49df-ad29-3d7d2f89e884,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\"" Sep 12 10:17:36.854443 containerd[1513]: time="2025-09-12T10:17:36.854373767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:17:36.860155 containerd[1513]: time="2025-09-12T10:17:36.860094918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g4rqw,Uid:f260d462-701f-4d92-bed0-fa3d546237f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca69c2b1cec0153bc804cdbc6a39c39d5b83752f79c385ea0175208551e73cb\"" Sep 12 10:17:36.865258 containerd[1513]: time="2025-09-12T10:17:36.865157513Z" level=info msg="CreateContainer within sandbox \"3ca69c2b1cec0153bc804cdbc6a39c39d5b83752f79c385ea0175208551e73cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:17:36.889304 containerd[1513]: time="2025-09-12T10:17:36.889220869Z" level=info msg="CreateContainer within sandbox \"3ca69c2b1cec0153bc804cdbc6a39c39d5b83752f79c385ea0175208551e73cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65efd8124571540cfc994b694cdbbde15240a48b8491b4a3a01dfbb6d86bacdb\"" Sep 12 10:17:36.890885 containerd[1513]: time="2025-09-12T10:17:36.890822661Z" level=info msg="StartContainer for \"65efd8124571540cfc994b694cdbbde15240a48b8491b4a3a01dfbb6d86bacdb\"" Sep 12 10:17:36.939160 systemd[1]: Started cri-containerd-65efd8124571540cfc994b694cdbbde15240a48b8491b4a3a01dfbb6d86bacdb.scope - libcontainer container 65efd8124571540cfc994b694cdbbde15240a48b8491b4a3a01dfbb6d86bacdb. 
Sep 12 10:17:36.954177 containerd[1513]: time="2025-09-12T10:17:36.951622975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kmgqp,Uid:d2af3b83-2edd-44b8-a1b2-4fca01315eff,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:37.013033 containerd[1513]: time="2025-09-12T10:17:37.012962418Z" level=info msg="StartContainer for \"65efd8124571540cfc994b694cdbbde15240a48b8491b4a3a01dfbb6d86bacdb\" returns successfully" Sep 12 10:17:37.016669 containerd[1513]: time="2025-09-12T10:17:37.016167548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:37.016669 containerd[1513]: time="2025-09-12T10:17:37.016249983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:37.016669 containerd[1513]: time="2025-09-12T10:17:37.016276224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:37.017590 containerd[1513]: time="2025-09-12T10:17:37.017431089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:37.057907 systemd[1]: Started cri-containerd-d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5.scope - libcontainer container d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5. Sep 12 10:17:37.163900 containerd[1513]: time="2025-09-12T10:17:37.163821262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kmgqp,Uid:d2af3b83-2edd-44b8-a1b2-4fca01315eff,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\"" Sep 12 10:17:38.076053 kubelet[2674]: I0912 10:17:38.074344 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g4rqw" podStartSLOduration=2.074315764 podStartE2EDuration="2.074315764s" podCreationTimestamp="2025-09-12 10:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:37.207028933 +0000 UTC m=+5.346989864" watchObservedRunningTime="2025-09-12 10:17:38.074315764 +0000 UTC m=+6.214276695" Sep 12 10:17:42.259582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227941408.mount: Deactivated successfully. 
Sep 12 10:17:45.615212 containerd[1513]: time="2025-09-12T10:17:45.615091493Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 10:17:45.618673 containerd[1513]: time="2025-09-12T10:17:45.616810072Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:45.618673 containerd[1513]: time="2025-09-12T10:17:45.618107609Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.763583651s" Sep 12 10:17:45.618673 containerd[1513]: time="2025-09-12T10:17:45.618162282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 10:17:45.619345 containerd[1513]: time="2025-09-12T10:17:45.619310168Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:45.620186 containerd[1513]: time="2025-09-12T10:17:45.620149112Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 10:17:45.623472 containerd[1513]: time="2025-09-12T10:17:45.623432806Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:17:45.649488 containerd[1513]: time="2025-09-12T10:17:45.649403695Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\"" Sep 12 10:17:45.650466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441439432.mount: Deactivated successfully. Sep 12 10:17:45.654764 containerd[1513]: time="2025-09-12T10:17:45.654366380Z" level=info msg="StartContainer for \"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\"" Sep 12 10:17:45.717064 systemd[1]: Started cri-containerd-dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66.scope - libcontainer container dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66. Sep 12 10:17:45.767296 containerd[1513]: time="2025-09-12T10:17:45.766987557Z" level=info msg="StartContainer for \"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\" returns successfully" Sep 12 10:17:45.785586 systemd[1]: cri-containerd-dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66.scope: Deactivated successfully. Sep 12 10:17:46.641363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66-rootfs.mount: Deactivated successfully. 
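Editor's sketch (not part of the log): the "Pulled image ... in 8.763583651s" entry above can be cross-checked against the journal timestamps of the PullImage request (10:17:36.854373767Z) and the completion message (10:17:45.618107609Z). The small residual is expected, since containerd's internal pull timer starts slightly after the request is logged. Timestamps below are truncated to microseconds because Python's %f parses at most six fractional digits.

```python
from datetime import datetime

# Timestamps copied from the two containerd entries above, truncated to microseconds.
FMT = "%Y-%m-%dT%H:%M:%S.%f%z"
pull_requested = datetime.strptime("2025-09-12T10:17:36.854373+0000", FMT)
pull_reported  = datetime.strptime("2025-09-12T10:17:45.618107+0000", FMT)

gap = (pull_reported - pull_requested).total_seconds()
print(f"gap between log entries:          {gap:.6f} s")   # ~8.763734 s
print("duration reported by containerd:  8.763584 s")     # from the log line above
```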
Sep 12 10:17:47.629163 containerd[1513]: time="2025-09-12T10:17:47.629058457Z" level=info msg="shim disconnected" id=dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66 namespace=k8s.io Sep 12 10:17:47.629163 containerd[1513]: time="2025-09-12T10:17:47.629159156Z" level=warning msg="cleaning up after shim disconnected" id=dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66 namespace=k8s.io Sep 12 10:17:47.629987 containerd[1513]: time="2025-09-12T10:17:47.629181595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:17:47.821604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246797139.mount: Deactivated successfully. Sep 12 10:17:48.294446 containerd[1513]: time="2025-09-12T10:17:48.294152855Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:17:48.322020 containerd[1513]: time="2025-09-12T10:17:48.321928965Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\"" Sep 12 10:17:48.326816 containerd[1513]: time="2025-09-12T10:17:48.326060946Z" level=info msg="StartContainer for \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\"" Sep 12 10:17:48.424763 systemd[1]: Started cri-containerd-148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5.scope - libcontainer container 148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5. Sep 12 10:17:48.495288 containerd[1513]: time="2025-09-12T10:17:48.495136884Z" level=info msg="StartContainer for \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\" returns successfully" Sep 12 10:17:48.521595 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:17:48.522121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:48.524309 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:17:48.534940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:17:48.539778 systemd[1]: cri-containerd-148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5.scope: Deactivated successfully. Sep 12 10:17:48.599911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:48.687303 containerd[1513]: time="2025-09-12T10:17:48.687140334Z" level=info msg="shim disconnected" id=148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5 namespace=k8s.io Sep 12 10:17:48.687303 containerd[1513]: time="2025-09-12T10:17:48.687297228Z" level=warning msg="cleaning up after shim disconnected" id=148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5 namespace=k8s.io Sep 12 10:17:48.687303 containerd[1513]: time="2025-09-12T10:17:48.687311899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:17:48.808094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5-rootfs.mount: Deactivated successfully. 
Sep 12 10:17:49.106071 containerd[1513]: time="2025-09-12T10:17:49.105982227Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:49.107676 containerd[1513]: time="2025-09-12T10:17:49.107341954Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 10:17:49.109249 containerd[1513]: time="2025-09-12T10:17:49.108700485Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:49.111235 containerd[1513]: time="2025-09-12T10:17:49.111193138Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.490989855s" Sep 12 10:17:49.111407 containerd[1513]: time="2025-09-12T10:17:49.111379843Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:17:49.115153 containerd[1513]: time="2025-09-12T10:17:49.115112578Z" level=info msg="CreateContainer within sandbox \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:17:49.136790 containerd[1513]: time="2025-09-12T10:17:49.136729865Z" level=info msg="CreateContainer within sandbox \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\"" Sep 12 10:17:49.138130 containerd[1513]: time="2025-09-12T10:17:49.138090196Z" level=info msg="StartContainer for \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\"" Sep 12 10:17:49.197970 systemd[1]: Started cri-containerd-00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c.scope - libcontainer container 00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c. 
Sep 12 10:17:49.250451 containerd[1513]: time="2025-09-12T10:17:49.250329375Z" level=info msg="StartContainer for \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\" returns successfully" Sep 12 10:17:49.312472 containerd[1513]: time="2025-09-12T10:17:49.312283085Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:17:49.343739 containerd[1513]: time="2025-09-12T10:17:49.343674129Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\"" Sep 12 10:17:49.346077 containerd[1513]: time="2025-09-12T10:17:49.345857000Z" level=info msg="StartContainer for \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\"" Sep 12 10:17:49.422887 kubelet[2674]: I0912 10:17:49.422614 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kmgqp" podStartSLOduration=1.476273271 podStartE2EDuration="13.42258207s" podCreationTimestamp="2025-09-12 10:17:36 +0000 UTC" firstStartedPulling="2025-09-12 10:17:37.166292351 +0000 UTC m=+5.306253260" lastFinishedPulling="2025-09-12 10:17:49.112601138 +0000 UTC m=+17.252562059" observedRunningTime="2025-09-12 10:17:49.327269234 +0000 UTC m=+17.467230165" watchObservedRunningTime="2025-09-12 10:17:49.42258207 +0000 UTC m=+17.562542995" Sep 12 10:17:49.441979 systemd[1]: Started cri-containerd-09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1.scope - libcontainer container 09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1. Sep 12 10:17:49.565287 containerd[1513]: time="2025-09-12T10:17:49.565155162Z" level=info msg="StartContainer for \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\" returns successfully" Sep 12 10:17:49.586802 systemd[1]: cri-containerd-09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1.scope: Deactivated successfully. 
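Editor's sketch (not part of the log): the pod_startup_latency_tracker entry for cilium-operator-6c4d7847fc-kmgqp above is internally consistent. Judging from the logged values, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window taken from the monotonic "m=+..." offsets; this is an inference from the numbers in this entry, not a quote of the kubelet source.

```python
# Values copied from the pod_startup_latency_tracker entry above (cilium-operator).
watch_observed_running = 49.42258207   # 10:17:49.42258207 (seconds within 10:17)
pod_created            = 36.0          # podCreationTimestamp 10:17:36 (second precision)
first_pull_m           = 5.306253260   # firstStartedPulling monotonic offset m=+5.306253260
last_pull_m            = 17.252562059  # lastFinishedPulling monotonic offset m=+17.252562059

e2e = watch_observed_running - pod_created     # end-to-end startup time
slo = e2e - (last_pull_m - first_pull_m)       # startup time excluding image pulling

print(f"podStartE2EDuration ~ {e2e:.8f} s  (log: 13.42258207s)")
print(f"podStartSLOduration ~ {slo:.9f} s (log: 1.476273271)")
```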
Sep 12 10:17:49.705947 containerd[1513]: time="2025-09-12T10:17:49.704956073Z" level=info msg="shim disconnected" id=09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1 namespace=k8s.io Sep 12 10:17:49.705947 containerd[1513]: time="2025-09-12T10:17:49.705755889Z" level=warning msg="cleaning up after shim disconnected" id=09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1 namespace=k8s.io Sep 12 10:17:49.705947 containerd[1513]: time="2025-09-12T10:17:49.705775349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:17:50.318635 containerd[1513]: time="2025-09-12T10:17:50.318359247Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:17:50.342757 containerd[1513]: time="2025-09-12T10:17:50.342503258Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\"" Sep 12 10:17:50.348097 containerd[1513]: time="2025-09-12T10:17:50.345371979Z" level=info msg="StartContainer for \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\"" Sep 12 10:17:50.455817 systemd[1]: Started cri-containerd-76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9.scope - libcontainer container 76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9. Sep 12 10:17:50.556772 containerd[1513]: time="2025-09-12T10:17:50.555867568Z" level=info msg="StartContainer for \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\" returns successfully" Sep 12 10:17:50.561763 systemd[1]: cri-containerd-76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9.scope: Deactivated successfully. Sep 12 10:17:50.631385 containerd[1513]: time="2025-09-12T10:17:50.631159489Z" level=info msg="shim disconnected" id=76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9 namespace=k8s.io Sep 12 10:17:50.631385 containerd[1513]: time="2025-09-12T10:17:50.631255624Z" level=warning msg="cleaning up after shim disconnected" id=76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9 namespace=k8s.io Sep 12 10:17:50.631385 containerd[1513]: time="2025-09-12T10:17:50.631268812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:17:50.809941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9-rootfs.mount: Deactivated successfully. 
Sep 12 10:17:51.327876 containerd[1513]: time="2025-09-12T10:17:51.327059774Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:17:51.372170 containerd[1513]: time="2025-09-12T10:17:51.372099046Z" level=info msg="CreateContainer within sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\"" Sep 12 10:17:51.374939 containerd[1513]: time="2025-09-12T10:17:51.374880445Z" level=info msg="StartContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\"" Sep 12 10:17:51.437012 systemd[1]: Started cri-containerd-ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3.scope - libcontainer container ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3. Sep 12 10:17:51.489874 containerd[1513]: time="2025-09-12T10:17:51.489781609Z" level=info msg="StartContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" returns successfully" Sep 12 10:17:51.689694 kubelet[2674]: I0912 10:17:51.689498 2674 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 10:17:51.750722 systemd[1]: Created slice kubepods-burstable-poda045f66f_96a4_481e_b11e_cb3aa2a2293f.slice - libcontainer container kubepods-burstable-poda045f66f_96a4_481e_b11e_cb3aa2a2293f.slice. Sep 12 10:17:51.772142 systemd[1]: Created slice kubepods-burstable-poddc6e9614_44a3_4937_b394_5e0b593a3aae.slice - libcontainer container kubepods-burstable-poddc6e9614_44a3_4937_b394_5e0b593a3aae.slice. Sep 12 10:17:51.812678 kubelet[2674]: I0912 10:17:51.811036 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a045f66f-96a4-481e-b11e-cb3aa2a2293f-config-volume\") pod \"coredns-668d6bf9bc-5zkch\" (UID: \"a045f66f-96a4-481e-b11e-cb3aa2a2293f\") " pod="kube-system/coredns-668d6bf9bc-5zkch" Sep 12 10:17:51.812678 kubelet[2674]: I0912 10:17:51.811106 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc6e9614-44a3-4937-b394-5e0b593a3aae-config-volume\") pod \"coredns-668d6bf9bc-q8lqx\" (UID: \"dc6e9614-44a3-4937-b394-5e0b593a3aae\") " pod="kube-system/coredns-668d6bf9bc-q8lqx" Sep 12 10:17:51.812678 kubelet[2674]: I0912 10:17:51.811152 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6pq\" (UniqueName: \"kubernetes.io/projected/dc6e9614-44a3-4937-b394-5e0b593a3aae-kube-api-access-wj6pq\") pod \"coredns-668d6bf9bc-q8lqx\" (UID: \"dc6e9614-44a3-4937-b394-5e0b593a3aae\") " pod="kube-system/coredns-668d6bf9bc-q8lqx" Sep 12 10:17:51.812678 kubelet[2674]: I0912 10:17:51.811192 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c79\" (UniqueName: \"kubernetes.io/projected/a045f66f-96a4-481e-b11e-cb3aa2a2293f-kube-api-access-p8c79\") pod \"coredns-668d6bf9bc-5zkch\" (UID: \"a045f66f-96a4-481e-b11e-cb3aa2a2293f\") " pod="kube-system/coredns-668d6bf9bc-5zkch" Sep 12 10:17:52.068830 containerd[1513]: time="2025-09-12T10:17:52.068199171Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5zkch,Uid:a045f66f-96a4-481e-b11e-cb3aa2a2293f,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:52.086543 containerd[1513]: time="2025-09-12T10:17:52.086485328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8lqx,Uid:dc6e9614-44a3-4937-b394-5e0b593a3aae,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:54.183331 systemd-networkd[1381]: cilium_host: Link UP Sep 12 10:17:54.185362 systemd-networkd[1381]: cilium_net: Link UP Sep 12 10:17:54.185760 systemd-networkd[1381]: cilium_net: Gained carrier Sep 12 10:17:54.186879 systemd-networkd[1381]: cilium_host: Gained carrier Sep 12 10:17:54.365605 systemd-networkd[1381]: cilium_vxlan: Link UP Sep 12 10:17:54.365618 systemd-networkd[1381]: cilium_vxlan: Gained carrier Sep 12 10:17:54.440097 systemd-networkd[1381]: cilium_host: Gained IPv6LL Sep 12 10:17:54.607026 systemd-networkd[1381]: cilium_net: Gained IPv6LL Sep 12 10:17:54.680832 kernel: NET: Registered PF_ALG protocol family Sep 12 10:17:55.681601 systemd-networkd[1381]: lxc_health: Link UP Sep 12 10:17:55.695718 systemd-networkd[1381]: lxc_health: Gained carrier Sep 12 10:17:55.775520 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL Sep 12 10:17:56.174424 kernel: eth0: renamed from tmp3f225 Sep 12 10:17:56.187903 systemd-networkd[1381]: lxcd47a78337988: Link UP Sep 12 10:17:56.198074 systemd-networkd[1381]: lxcd47a78337988: Gained carrier Sep 12 10:17:56.219858 systemd-networkd[1381]: lxcb3321c60084f: Link UP Sep 12 10:17:56.234251 kernel: eth0: renamed from tmp3af3b Sep 12 10:17:56.248484 systemd-networkd[1381]: lxcb3321c60084f: Gained carrier Sep 12 10:17:56.713098 kubelet[2674]: I0912 10:17:56.712985 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tf6mr" podStartSLOduration=11.945366615 podStartE2EDuration="20.712955287s" podCreationTimestamp="2025-09-12 10:17:36 +0000 UTC" firstStartedPulling="2025-09-12 10:17:36.852024015 +0000 UTC m=+4.991984935" lastFinishedPulling="2025-09-12 10:17:45.61961269 +0000 UTC m=+13.759573607" observedRunningTime="2025-09-12 10:17:52.395961497 +0000 UTC m=+20.535922432" watchObservedRunningTime="2025-09-12 10:17:56.712955287 +0000 UTC m=+24.852916214" Sep 12 10:17:57.054874 systemd-networkd[1381]: lxc_health: Gained IPv6LL Sep 12 10:17:57.631051 systemd-networkd[1381]: lxcb3321c60084f: Gained IPv6LL Sep 12 10:17:57.694909 systemd-networkd[1381]: lxcd47a78337988: Gained IPv6LL Sep 12 10:17:59.922627 ntpd[1476]: Listen normally on 7 cilium_host 192.168.0.234:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 7 cilium_host 192.168.0.234:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 8 cilium_net [fe80::c435:53ff:fe70:b794%4]:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 9 cilium_host [fe80::98b3:acff:fed9:3acb%5]:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 10 cilium_vxlan [fe80::bc99:f8ff:fef2:844f%6]:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 11 lxc_health [fe80::380d:96ff:fe8a:4f5%8]:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 12 lxcd47a78337988 [fe80::34fc:abff:fe64:6215%10]:123 Sep 12 10:17:59.924154 ntpd[1476]: 12 Sep 10:17:59 ntpd[1476]: Listen normally on 13 lxcb3321c60084f [fe80::e8c9:6cff:feea:a271%12]:123 Sep 12 10:17:59.922841 ntpd[1476]: Listen normally on 8 cilium_net 
[fe80::c435:53ff:fe70:b794%4]:123 Sep 12 10:17:59.922958 ntpd[1476]: Listen normally on 9 cilium_host [fe80::98b3:acff:fed9:3acb%5]:123 Sep 12 10:17:59.923028 ntpd[1476]: Listen normally on 10 cilium_vxlan [fe80::bc99:f8ff:fef2:844f%6]:123 Sep 12 10:17:59.923089 ntpd[1476]: Listen normally on 11 lxc_health [fe80::380d:96ff:fe8a:4f5%8]:123 Sep 12 10:17:59.923165 ntpd[1476]: Listen normally on 12 lxcd47a78337988 [fe80::34fc:abff:fe64:6215%10]:123 Sep 12 10:17:59.923237 ntpd[1476]: Listen normally on 13 lxcb3321c60084f [fe80::e8c9:6cff:feea:a271%12]:123 Sep 12 10:18:01.852470 containerd[1513]: time="2025-09-12T10:18:01.852034760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:01.853554 containerd[1513]: time="2025-09-12T10:18:01.853091268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:01.854205 containerd[1513]: time="2025-09-12T10:18:01.853765055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:01.856321 containerd[1513]: time="2025-09-12T10:18:01.855808518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:01.856321 containerd[1513]: time="2025-09-12T10:18:01.855878079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:01.856321 containerd[1513]: time="2025-09-12T10:18:01.855899315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:01.856321 containerd[1513]: time="2025-09-12T10:18:01.856018907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:01.857864 containerd[1513]: time="2025-09-12T10:18:01.856874623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:01.939443 systemd[1]: Started cri-containerd-3af3b92a7d335ebc9731870dc5d45d30c812ac6501c899e4ec99e05a333cb7e8.scope - libcontainer container 3af3b92a7d335ebc9731870dc5d45d30c812ac6501c899e4ec99e05a333cb7e8. Sep 12 10:18:01.955796 systemd[1]: Started cri-containerd-3f2250f3e70046d24b1abc2ca48db387d386e411a90785cbeb79612305930e7a.scope - libcontainer container 3f2250f3e70046d24b1abc2ca48db387d386e411a90785cbeb79612305930e7a. 
Sep 12 10:18:02.108633 containerd[1513]: time="2025-09-12T10:18:02.107774182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5zkch,Uid:a045f66f-96a4-481e-b11e-cb3aa2a2293f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f2250f3e70046d24b1abc2ca48db387d386e411a90785cbeb79612305930e7a\"" Sep 12 10:18:02.120636 containerd[1513]: time="2025-09-12T10:18:02.120482783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8lqx,Uid:dc6e9614-44a3-4937-b394-5e0b593a3aae,Namespace:kube-system,Attempt:0,} returns sandbox id \"3af3b92a7d335ebc9731870dc5d45d30c812ac6501c899e4ec99e05a333cb7e8\"" Sep 12 10:18:02.143138 containerd[1513]: time="2025-09-12T10:18:02.142804581Z" level=info msg="CreateContainer within sandbox \"3af3b92a7d335ebc9731870dc5d45d30c812ac6501c899e4ec99e05a333cb7e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:18:02.144349 containerd[1513]: time="2025-09-12T10:18:02.144135263Z" level=info msg="CreateContainer within sandbox \"3f2250f3e70046d24b1abc2ca48db387d386e411a90785cbeb79612305930e7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:18:02.181534 containerd[1513]: time="2025-09-12T10:18:02.181489402Z" level=info msg="CreateContainer within sandbox \"3af3b92a7d335ebc9731870dc5d45d30c812ac6501c899e4ec99e05a333cb7e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d88baaec784014babcbb4ab5c15748e9460e62473ca4722aa2af5bec3bd40fe6\"" Sep 12 10:18:02.183870 containerd[1513]: time="2025-09-12T10:18:02.183194180Z" level=info msg="StartContainer for \"d88baaec784014babcbb4ab5c15748e9460e62473ca4722aa2af5bec3bd40fe6\"" Sep 12 10:18:02.187196 containerd[1513]: time="2025-09-12T10:18:02.187152606Z" level=info msg="CreateContainer within sandbox \"3f2250f3e70046d24b1abc2ca48db387d386e411a90785cbeb79612305930e7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71973adac7ea8dcd648c1489a030efa7b279073b6dfa4f7392c2ea012960bc34\"" Sep 12 10:18:02.188351 containerd[1513]: time="2025-09-12T10:18:02.188319150Z" level=info msg="StartContainer for \"71973adac7ea8dcd648c1489a030efa7b279073b6dfa4f7392c2ea012960bc34\"" Sep 12 10:18:02.253276 systemd[1]: Started cri-containerd-d88baaec784014babcbb4ab5c15748e9460e62473ca4722aa2af5bec3bd40fe6.scope - libcontainer container d88baaec784014babcbb4ab5c15748e9460e62473ca4722aa2af5bec3bd40fe6. Sep 12 10:18:02.264929 systemd[1]: Started cri-containerd-71973adac7ea8dcd648c1489a030efa7b279073b6dfa4f7392c2ea012960bc34.scope - libcontainer container 71973adac7ea8dcd648c1489a030efa7b279073b6dfa4f7392c2ea012960bc34. 
Sep 12 10:18:02.337061 containerd[1513]: time="2025-09-12T10:18:02.336870426Z" level=info msg="StartContainer for \"d88baaec784014babcbb4ab5c15748e9460e62473ca4722aa2af5bec3bd40fe6\" returns successfully" Sep 12 10:18:02.369933 containerd[1513]: time="2025-09-12T10:18:02.368965881Z" level=info msg="StartContainer for \"71973adac7ea8dcd648c1489a030efa7b279073b6dfa4f7392c2ea012960bc34\" returns successfully" Sep 12 10:18:02.412186 kubelet[2674]: I0912 10:18:02.412067 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q8lqx" podStartSLOduration=26.412038032 podStartE2EDuration="26.412038032s" podCreationTimestamp="2025-09-12 10:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:02.410233319 +0000 UTC m=+30.550194277" watchObservedRunningTime="2025-09-12 10:18:02.412038032 +0000 UTC m=+30.551998961" Sep 12 10:18:02.471785 kubelet[2674]: I0912 10:18:02.471604 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5zkch" podStartSLOduration=26.471574523 podStartE2EDuration="26.471574523s" podCreationTimestamp="2025-09-12 10:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:02.465148542 +0000 UTC m=+30.605109472" watchObservedRunningTime="2025-09-12 10:18:02.471574523 +0000 UTC m=+30.611535452" Sep 12 10:18:34.969149 systemd[1]: Started sshd@8-10.128.0.19:22-139.178.89.65:60262.service - OpenSSH per-connection server daemon (139.178.89.65:60262). Sep 12 10:18:35.358781 sshd[4068]: Accepted publickey for core from 139.178.89.65 port 60262 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:35.359840 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:35.366489 systemd-logind[1490]: New session 8 of user core. Sep 12 10:18:35.374972 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 10:18:35.764548 sshd[4070]: Connection closed by 139.178.89.65 port 60262 Sep 12 10:18:35.765976 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:35.774338 systemd[1]: sshd@8-10.128.0.19:22-139.178.89.65:60262.service: Deactivated successfully. Sep 12 10:18:35.777597 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 10:18:35.779491 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Sep 12 10:18:35.782257 systemd-logind[1490]: Removed session 8. Sep 12 10:18:40.847370 systemd[1]: Started sshd@9-10.128.0.19:22-139.178.89.65:33148.service - OpenSSH per-connection server daemon (139.178.89.65:33148). Sep 12 10:18:41.230564 sshd[4084]: Accepted publickey for core from 139.178.89.65 port 33148 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:41.233196 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:41.241311 systemd-logind[1490]: New session 9 of user core. Sep 12 10:18:41.246981 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 10:18:41.597505 sshd[4086]: Connection closed by 139.178.89.65 port 33148 Sep 12 10:18:41.598705 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:41.605963 systemd[1]: sshd@9-10.128.0.19:22-139.178.89.65:33148.service: Deactivated successfully. 
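
The kubelet pod_startup_latency_tracker entries above report podStartSLOduration=26.412038032s for coredns-668d6bf9bc-q8lqx; with firstStartedPulling/lastFinishedPulling left at the zero time (no image pull recorded), that figure is simply the watch-observed running time minus podCreationTimestamp. A quick check of the arithmetic with values copied from the entry (nanoseconds truncated to microseconds):

    from datetime import datetime, timezone

    created = datetime(2025, 9, 12, 10, 17, 36, tzinfo=timezone.utc)           # podCreationTimestamp
    observed = datetime(2025, 9, 12, 10, 18, 2, 412038, tzinfo=timezone.utc)   # watchObservedRunningTime

    print((observed - created).total_seconds())  # 26.412038, matching podStartSLOduration
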
Sep 12 10:18:41.609193 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 10:18:41.610890 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Sep 12 10:18:41.613173 systemd-logind[1490]: Removed session 9. Sep 12 10:18:46.672312 systemd[1]: Started sshd@10-10.128.0.19:22-139.178.89.65:33162.service - OpenSSH per-connection server daemon (139.178.89.65:33162). Sep 12 10:18:47.069868 sshd[4099]: Accepted publickey for core from 139.178.89.65 port 33162 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:47.072142 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:47.078803 systemd-logind[1490]: New session 10 of user core. Sep 12 10:18:47.087979 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 10:18:47.439155 sshd[4101]: Connection closed by 139.178.89.65 port 33162 Sep 12 10:18:47.440791 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:47.448295 systemd[1]: sshd@10-10.128.0.19:22-139.178.89.65:33162.service: Deactivated successfully. Sep 12 10:18:47.451720 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 10:18:47.452861 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Sep 12 10:18:47.454925 systemd-logind[1490]: Removed session 10. Sep 12 10:18:52.516188 systemd[1]: Started sshd@11-10.128.0.19:22-139.178.89.65:34912.service - OpenSSH per-connection server daemon (139.178.89.65:34912). Sep 12 10:18:52.911145 sshd[4114]: Accepted publickey for core from 139.178.89.65 port 34912 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:52.913211 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:52.919709 systemd-logind[1490]: New session 11 of user core. Sep 12 10:18:52.927037 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 10:18:53.288064 sshd[4116]: Connection closed by 139.178.89.65 port 34912 Sep 12 10:18:53.290365 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:53.299056 systemd[1]: sshd@11-10.128.0.19:22-139.178.89.65:34912.service: Deactivated successfully. Sep 12 10:18:53.303746 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 10:18:53.305058 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Sep 12 10:18:53.307224 systemd-logind[1490]: Removed session 11. Sep 12 10:18:58.367305 systemd[1]: Started sshd@12-10.128.0.19:22-139.178.89.65:34914.service - OpenSSH per-connection server daemon (139.178.89.65:34914). Sep 12 10:18:58.765843 sshd[4131]: Accepted publickey for core from 139.178.89.65 port 34914 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:58.766695 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:58.774208 systemd-logind[1490]: New session 12 of user core. Sep 12 10:18:58.780967 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 10:18:59.132225 sshd[4133]: Connection closed by 139.178.89.65 port 34914 Sep 12 10:18:59.133435 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:59.140163 systemd[1]: sshd@12-10.128.0.19:22-139.178.89.65:34914.service: Deactivated successfully. Sep 12 10:18:59.144238 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 10:18:59.145551 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. 
Sep 12 10:18:59.147603 systemd-logind[1490]: Removed session 12. Sep 12 10:18:59.209469 systemd[1]: Started sshd@13-10.128.0.19:22-139.178.89.65:34928.service - OpenSSH per-connection server daemon (139.178.89.65:34928). Sep 12 10:18:59.600415 sshd[4146]: Accepted publickey for core from 139.178.89.65 port 34928 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:18:59.602680 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:59.609935 systemd-logind[1490]: New session 13 of user core. Sep 12 10:18:59.621939 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 10:19:00.023901 sshd[4148]: Connection closed by 139.178.89.65 port 34928 Sep 12 10:19:00.026362 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:00.032488 systemd[1]: sshd@13-10.128.0.19:22-139.178.89.65:34928.service: Deactivated successfully. Sep 12 10:19:00.036562 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 10:19:00.037949 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. Sep 12 10:19:00.039665 systemd-logind[1490]: Removed session 13. Sep 12 10:19:00.102446 systemd[1]: Started sshd@14-10.128.0.19:22-139.178.89.65:57638.service - OpenSSH per-connection server daemon (139.178.89.65:57638). Sep 12 10:19:00.493391 sshd[4158]: Accepted publickey for core from 139.178.89.65 port 57638 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:00.496168 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:00.502815 systemd-logind[1490]: New session 14 of user core. Sep 12 10:19:00.510890 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 10:19:00.856610 sshd[4161]: Connection closed by 139.178.89.65 port 57638 Sep 12 10:19:00.857770 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:00.864843 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Sep 12 10:19:00.865405 systemd[1]: sshd@14-10.128.0.19:22-139.178.89.65:57638.service: Deactivated successfully. Sep 12 10:19:00.868925 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 10:19:00.870877 systemd-logind[1490]: Removed session 14. Sep 12 10:19:05.936216 systemd[1]: Started sshd@15-10.128.0.19:22-139.178.89.65:57654.service - OpenSSH per-connection server daemon (139.178.89.65:57654). Sep 12 10:19:06.329393 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 57654 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:06.331852 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:06.339381 systemd-logind[1490]: New session 15 of user core. Sep 12 10:19:06.345880 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 10:19:06.687472 sshd[4175]: Connection closed by 139.178.89.65 port 57654 Sep 12 10:19:06.689091 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:06.694129 systemd[1]: sshd@15-10.128.0.19:22-139.178.89.65:57654.service: Deactivated successfully. Sep 12 10:19:06.697293 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 10:19:06.700113 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. Sep 12 10:19:06.702221 systemd-logind[1490]: Removed session 15. 
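
Sessions 8 through 15 above all follow the same systemd-logind pattern: "New session N of user core." followed shortly afterwards by "Removed session N.". A sketch that pairs those two events per session ID and reports the duration; EVENT_RE and session_durations are assumed names, and the year is ignored since these short journal timestamps do not carry one.

    import re
    from datetime import datetime

    TS_FMT = "%b %d %H:%M:%S.%f"
    EVENT_RE = re.compile(
        r"(?P<ts>\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: "
        r"(?:New session (?P<new>\d+) of user \S+|Removed session (?P<gone>\d+))\."
    )

    def session_durations(journal_text):
        """Pair logind 'New session'/'Removed session' events and return {session_id: seconds}."""
        opened, durations = {}, {}
        for m in EVENT_RE.finditer(journal_text):
            ts = datetime.strptime(m["ts"], TS_FMT)
            if m["new"]:
                opened[m["new"]] = ts
            elif m["gone"] in opened:
                durations[m["gone"]] = (ts - opened.pop(m["gone"])).total_seconds()
        return durations
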
Sep 12 10:19:11.769077 systemd[1]: Started sshd@16-10.128.0.19:22-139.178.89.65:39424.service - OpenSSH per-connection server daemon (139.178.89.65:39424). Sep 12 10:19:12.153173 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 39424 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:12.155380 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:12.162854 systemd-logind[1490]: New session 16 of user core. Sep 12 10:19:12.169885 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 10:19:12.515813 sshd[4191]: Connection closed by 139.178.89.65 port 39424 Sep 12 10:19:12.517325 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:12.523526 systemd[1]: sshd@16-10.128.0.19:22-139.178.89.65:39424.service: Deactivated successfully. Sep 12 10:19:12.526811 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 10:19:12.528132 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. Sep 12 10:19:12.529898 systemd-logind[1490]: Removed session 16. Sep 12 10:19:12.583078 systemd[1]: Started sshd@17-10.128.0.19:22-139.178.89.65:39428.service - OpenSSH per-connection server daemon (139.178.89.65:39428). Sep 12 10:19:12.959592 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 39428 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:12.961605 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:12.968363 systemd-logind[1490]: New session 17 of user core. Sep 12 10:19:12.971881 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 10:19:13.371802 sshd[4205]: Connection closed by 139.178.89.65 port 39428 Sep 12 10:19:13.372939 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:13.379387 systemd[1]: sshd@17-10.128.0.19:22-139.178.89.65:39428.service: Deactivated successfully. Sep 12 10:19:13.383170 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 10:19:13.384399 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. Sep 12 10:19:13.386058 systemd-logind[1490]: Removed session 17. Sep 12 10:19:13.449123 systemd[1]: Started sshd@18-10.128.0.19:22-139.178.89.65:39434.service - OpenSSH per-connection server daemon (139.178.89.65:39434). Sep 12 10:19:13.839304 sshd[4215]: Accepted publickey for core from 139.178.89.65 port 39434 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:13.841760 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:13.850317 systemd-logind[1490]: New session 18 of user core. Sep 12 10:19:13.859885 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 10:19:14.829436 sshd[4217]: Connection closed by 139.178.89.65 port 39434 Sep 12 10:19:14.830567 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:14.836850 systemd[1]: sshd@18-10.128.0.19:22-139.178.89.65:39434.service: Deactivated successfully. Sep 12 10:19:14.842243 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 10:19:14.843864 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. Sep 12 10:19:14.845680 systemd-logind[1490]: Removed session 18. Sep 12 10:19:14.908098 systemd[1]: Started sshd@19-10.128.0.19:22-139.178.89.65:39438.service - OpenSSH per-connection server daemon (139.178.89.65:39438). 
Sep 12 10:19:15.303639 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 39438 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:15.306675 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:15.313718 systemd-logind[1490]: New session 19 of user core. Sep 12 10:19:15.324009 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 10:19:15.805231 sshd[4236]: Connection closed by 139.178.89.65 port 39438 Sep 12 10:19:15.806353 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:15.811723 systemd[1]: sshd@19-10.128.0.19:22-139.178.89.65:39438.service: Deactivated successfully. Sep 12 10:19:15.816042 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 10:19:15.818805 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. Sep 12 10:19:15.820359 systemd-logind[1490]: Removed session 19. Sep 12 10:19:15.879395 systemd[1]: Started sshd@20-10.128.0.19:22-139.178.89.65:39454.service - OpenSSH per-connection server daemon (139.178.89.65:39454). Sep 12 10:19:16.270557 sshd[4246]: Accepted publickey for core from 139.178.89.65 port 39454 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:16.272828 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:16.279338 systemd-logind[1490]: New session 20 of user core. Sep 12 10:19:16.288900 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 10:19:16.626472 sshd[4248]: Connection closed by 139.178.89.65 port 39454 Sep 12 10:19:16.627580 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:16.634319 systemd[1]: sshd@20-10.128.0.19:22-139.178.89.65:39454.service: Deactivated successfully. Sep 12 10:19:16.638921 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 10:19:16.640128 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. Sep 12 10:19:16.641772 systemd-logind[1490]: Removed session 20. Sep 12 10:19:21.701200 systemd[1]: Started sshd@21-10.128.0.19:22-139.178.89.65:42518.service - OpenSSH per-connection server daemon (139.178.89.65:42518). Sep 12 10:19:22.067206 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 42518 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:22.070186 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:22.079670 systemd-logind[1490]: New session 21 of user core. Sep 12 10:19:22.085109 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 10:19:22.413018 sshd[4264]: Connection closed by 139.178.89.65 port 42518 Sep 12 10:19:22.414515 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:22.419974 systemd[1]: sshd@21-10.128.0.19:22-139.178.89.65:42518.service: Deactivated successfully. Sep 12 10:19:22.423362 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 10:19:22.426257 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. Sep 12 10:19:22.427817 systemd-logind[1490]: Removed session 21. Sep 12 10:19:27.492189 systemd[1]: Started sshd@22-10.128.0.19:22-139.178.89.65:42524.service - OpenSSH per-connection server daemon (139.178.89.65:42524). 
Sep 12 10:19:27.888206 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 42524 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:27.889572 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:27.896213 systemd-logind[1490]: New session 22 of user core. Sep 12 10:19:27.904922 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 10:19:28.252951 sshd[4278]: Connection closed by 139.178.89.65 port 42524 Sep 12 10:19:28.254182 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:28.260886 systemd[1]: sshd@22-10.128.0.19:22-139.178.89.65:42524.service: Deactivated successfully. Sep 12 10:19:28.264491 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 10:19:28.266190 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. Sep 12 10:19:28.267766 systemd-logind[1490]: Removed session 22. Sep 12 10:19:33.333431 systemd[1]: Started sshd@23-10.128.0.19:22-139.178.89.65:53042.service - OpenSSH per-connection server daemon (139.178.89.65:53042). Sep 12 10:19:33.721750 sshd[4292]: Accepted publickey for core from 139.178.89.65 port 53042 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:33.724224 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:33.730578 systemd-logind[1490]: New session 23 of user core. Sep 12 10:19:33.734949 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 10:19:34.086764 sshd[4294]: Connection closed by 139.178.89.65 port 53042 Sep 12 10:19:34.087881 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:34.094805 systemd[1]: sshd@23-10.128.0.19:22-139.178.89.65:53042.service: Deactivated successfully. Sep 12 10:19:34.099615 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 10:19:34.101367 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. Sep 12 10:19:34.102880 systemd-logind[1490]: Removed session 23. Sep 12 10:19:34.168140 systemd[1]: Started sshd@24-10.128.0.19:22-139.178.89.65:53058.service - OpenSSH per-connection server daemon (139.178.89.65:53058). Sep 12 10:19:34.554743 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 53058 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:34.556849 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:34.563670 systemd-logind[1490]: New session 24 of user core. Sep 12 10:19:34.572071 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 10:19:36.310675 containerd[1513]: time="2025-09-12T10:19:36.310230435Z" level=info msg="StopContainer for \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\" with timeout 30 (s)" Sep 12 10:19:36.315335 containerd[1513]: time="2025-09-12T10:19:36.314636035Z" level=info msg="Stop container \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\" with signal terminated" Sep 12 10:19:36.343516 systemd[1]: run-containerd-runc-k8s.io-ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3-runc.zayWMD.mount: Deactivated successfully. Sep 12 10:19:36.358699 systemd[1]: cri-containerd-00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c.scope: Deactivated successfully. 
Sep 12 10:19:36.388141 containerd[1513]: time="2025-09-12T10:19:36.388062454Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:19:36.404563 containerd[1513]: time="2025-09-12T10:19:36.404339259Z" level=info msg="StopContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" with timeout 2 (s)" Sep 12 10:19:36.405209 containerd[1513]: time="2025-09-12T10:19:36.405020736Z" level=info msg="Stop container \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" with signal terminated" Sep 12 10:19:36.420311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c-rootfs.mount: Deactivated successfully. Sep 12 10:19:36.427991 systemd-networkd[1381]: lxc_health: Link DOWN Sep 12 10:19:36.428007 systemd-networkd[1381]: lxc_health: Lost carrier Sep 12 10:19:36.450441 containerd[1513]: time="2025-09-12T10:19:36.450337838Z" level=info msg="shim disconnected" id=00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c namespace=k8s.io Sep 12 10:19:36.450441 containerd[1513]: time="2025-09-12T10:19:36.450435581Z" level=warning msg="cleaning up after shim disconnected" id=00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c namespace=k8s.io Sep 12 10:19:36.450441 containerd[1513]: time="2025-09-12T10:19:36.450455012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:36.461216 systemd[1]: cri-containerd-ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3.scope: Deactivated successfully. Sep 12 10:19:36.462381 systemd[1]: cri-containerd-ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3.scope: Consumed 10.571s CPU time, 124.6M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 10:19:36.495857 containerd[1513]: time="2025-09-12T10:19:36.495450161Z" level=info msg="StopContainer for \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\" returns successfully" Sep 12 10:19:36.497729 containerd[1513]: time="2025-09-12T10:19:36.497058259Z" level=info msg="StopPodSandbox for \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\"" Sep 12 10:19:36.497729 containerd[1513]: time="2025-09-12T10:19:36.497123321Z" level=info msg="Container to stop \"00d4080a5af85c1f7c989c9a6ca2ffb44fbd0ddb891a955fc49e58f2a3a79a5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.504631 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5-shm.mount: Deactivated successfully. Sep 12 10:19:36.517499 systemd[1]: cri-containerd-d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5.scope: Deactivated successfully. Sep 12 10:19:36.537062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3-rootfs.mount: Deactivated successfully. 
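
When the cilium agent container's scope (ef67…) is stopped above, systemd logs what the unit consumed over its lifetime: 10.571s CPU time, 124.6M memory peak, 144K read and 13.3M written to disk. A small sketch for pulling those figures out of such lines; CONSUMED_RE and scope_consumption are assumptions for illustration, and the K/M/G suffixes are treated here as base-2 multiples.

    import re

    CONSUMED_RE = re.compile(
        r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+)(?P<unit>[KMG]) memory peak"
    )
    SCALE = {"K": 2**10, "M": 2**20, "G": 2**30}

    def scope_consumption(line):
        """Return (cpu_seconds, peak_memory_bytes) from a systemd 'Consumed ...' line,
        or None if the line carries no such summary."""
        m = CONSUMED_RE.search(line)
        if not m:
            return None
        return float(m["cpu"]), float(m["mem"]) * SCALE[m["unit"]]

    print(scope_consumption(
        "systemd[1]: cri-containerd-<id>.scope: "
        "Consumed 10.571s CPU time, 124.6M memory peak, 144K read from disk, 13.3M written to disk."
    ))  # -> (10.571, ~1.3e8 bytes)
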
Sep 12 10:19:36.547225 containerd[1513]: time="2025-09-12T10:19:36.547095550Z" level=info msg="shim disconnected" id=ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3 namespace=k8s.io Sep 12 10:19:36.547225 containerd[1513]: time="2025-09-12T10:19:36.547181050Z" level=warning msg="cleaning up after shim disconnected" id=ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3 namespace=k8s.io Sep 12 10:19:36.547225 containerd[1513]: time="2025-09-12T10:19:36.547195845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:36.591304 containerd[1513]: time="2025-09-12T10:19:36.589198838Z" level=info msg="StopContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" returns successfully" Sep 12 10:19:36.591304 containerd[1513]: time="2025-09-12T10:19:36.590337277Z" level=info msg="shim disconnected" id=d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5 namespace=k8s.io Sep 12 10:19:36.591304 containerd[1513]: time="2025-09-12T10:19:36.590417221Z" level=warning msg="cleaning up after shim disconnected" id=d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5 namespace=k8s.io Sep 12 10:19:36.591304 containerd[1513]: time="2025-09-12T10:19:36.590436839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:36.591304 containerd[1513]: time="2025-09-12T10:19:36.591228728Z" level=info msg="StopPodSandbox for \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\"" Sep 12 10:19:36.593293 containerd[1513]: time="2025-09-12T10:19:36.591772769Z" level=info msg="Container to stop \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.593293 containerd[1513]: time="2025-09-12T10:19:36.591848808Z" level=info msg="Container to stop \"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.593293 containerd[1513]: time="2025-09-12T10:19:36.591866298Z" level=info msg="Container to stop \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.593293 containerd[1513]: time="2025-09-12T10:19:36.591882819Z" level=info msg="Container to stop \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.593293 containerd[1513]: time="2025-09-12T10:19:36.591899352Z" level=info msg="Container to stop \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:36.615631 systemd[1]: cri-containerd-cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532.scope: Deactivated successfully. 
Sep 12 10:19:36.638640 containerd[1513]: time="2025-09-12T10:19:36.638545877Z" level=info msg="TearDown network for sandbox \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\" successfully" Sep 12 10:19:36.639907 containerd[1513]: time="2025-09-12T10:19:36.638633624Z" level=info msg="StopPodSandbox for \"d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5\" returns successfully" Sep 12 10:19:36.646408 kubelet[2674]: I0912 10:19:36.646346 2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5" Sep 12 10:19:36.683325 containerd[1513]: time="2025-09-12T10:19:36.683218893Z" level=info msg="shim disconnected" id=cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532 namespace=k8s.io Sep 12 10:19:36.683758 containerd[1513]: time="2025-09-12T10:19:36.683333176Z" level=warning msg="cleaning up after shim disconnected" id=cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532 namespace=k8s.io Sep 12 10:19:36.683758 containerd[1513]: time="2025-09-12T10:19:36.683348815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:36.708111 containerd[1513]: time="2025-09-12T10:19:36.707956043Z" level=info msg="TearDown network for sandbox \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" successfully" Sep 12 10:19:36.708111 containerd[1513]: time="2025-09-12T10:19:36.708014499Z" level=info msg="StopPodSandbox for \"cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532\" returns successfully" Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819109 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-etc-cni-netd\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819204 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-run\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819248 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-config-path\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819279 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cni-path\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819284 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.820787 kubelet[2674]: I0912 10:19:36.819310 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-bpf-maps\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819337 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-net\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819367 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2af3b83-2edd-44b8-a1b2-4fca01315eff-cilium-config-path\") pod \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\" (UID: \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819398 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rxmh\" (UniqueName: \"kubernetes.io/projected/d2af3b83-2edd-44b8-a1b2-4fca01315eff-kube-api-access-6rxmh\") pod \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\" (UID: \"d2af3b83-2edd-44b8-a1b2-4fca01315eff\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819427 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-clustermesh-secrets\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819459 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdlt9\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-kube-api-access-gdlt9\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821455 kubelet[2674]: I0912 10:19:36.819492 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-cgroup\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819526 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-xtables-lock\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819556 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-lib-modules\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819582 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-kernel\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" 
(UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819611 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hubble-tls\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819638 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hostproc\") pod \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\" (UID: \"93bc4f6b-ca4e-49df-ad29-3d7d2f89e884\") " Sep 12 10:19:36.821869 kubelet[2674]: I0912 10:19:36.819752 2674 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-etc-cni-netd\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.822552 kubelet[2674]: I0912 10:19:36.819796 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hostproc" (OuterVolumeSpecName: "hostproc") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.822552 kubelet[2674]: I0912 10:19:36.819831 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cni-path" (OuterVolumeSpecName: "cni-path") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.822552 kubelet[2674]: I0912 10:19:36.819857 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.822552 kubelet[2674]: I0912 10:19:36.819884 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.824591 kubelet[2674]: I0912 10:19:36.824204 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af3b83-2edd-44b8-a1b2-4fca01315eff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2af3b83-2edd-44b8-a1b2-4fca01315eff" (UID: "d2af3b83-2edd-44b8-a1b2-4fca01315eff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 10:19:36.825573 kubelet[2674]: I0912 10:19:36.825103 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.825573 kubelet[2674]: I0912 10:19:36.825202 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.830621 kubelet[2674]: I0912 10:19:36.830365 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.830621 kubelet[2674]: I0912 10:19:36.830451 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.835434 kubelet[2674]: I0912 10:19:36.835316 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:36.835871 kubelet[2674]: I0912 10:19:36.835693 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af3b83-2edd-44b8-a1b2-4fca01315eff-kube-api-access-6rxmh" (OuterVolumeSpecName: "kube-api-access-6rxmh") pod "d2af3b83-2edd-44b8-a1b2-4fca01315eff" (UID: "d2af3b83-2edd-44b8-a1b2-4fca01315eff"). InnerVolumeSpecName "kube-api-access-6rxmh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:19:36.838518 kubelet[2674]: I0912 10:19:36.838428 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-kube-api-access-gdlt9" (OuterVolumeSpecName: "kube-api-access-gdlt9") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "kube-api-access-gdlt9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:19:36.839354 kubelet[2674]: I0912 10:19:36.839058 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 10:19:36.839757 kubelet[2674]: I0912 10:19:36.839718 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 10:19:36.840421 kubelet[2674]: I0912 10:19:36.840389 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" (UID: "93bc4f6b-ca4e-49df-ad29-3d7d2f89e884"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.920968 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-cgroup\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921051 2674 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-xtables-lock\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921070 2674 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-lib-modules\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921087 2674 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-kernel\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921107 2674 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hubble-tls\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921124 2674 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-hostproc\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.921177 kubelet[2674]: I0912 10:19:36.921140 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-run\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.922786 kubelet[2674]: I0912 10:19:36.922737 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cilium-config-path\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.922800 2674 reconciler_common.go:299] "Volume detached for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-cni-path\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.922827 2674 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-bpf-maps\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.922844 2674 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-host-proc-sys-net\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.922958 2674 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2af3b83-2edd-44b8-a1b2-4fca01315eff-cilium-config-path\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.922987 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rxmh\" (UniqueName: \"kubernetes.io/projected/d2af3b83-2edd-44b8-a1b2-4fca01315eff-kube-api-access-6rxmh\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923008 kubelet[2674]: I0912 10:19:36.923009 2674 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-clustermesh-secrets\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:36.923414 kubelet[2674]: I0912 10:19:36.923026 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdlt9\" (UniqueName: \"kubernetes.io/projected/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884-kube-api-access-gdlt9\") on node \"ci-4230-2-2-nightly-20250911-2100-377226d477597500f469\" DevicePath \"\"" Sep 12 10:19:37.297179 kubelet[2674]: E0912 10:19:37.297108 2674 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:19:37.332495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8c9c251e3cb99b5532f607ca8e8ade01231651642dbe2c5c9334af2c1b1f4a5-rootfs.mount: Deactivated successfully. Sep 12 10:19:37.332703 systemd[1]: var-lib-kubelet-pods-d2af3b83\x2d2edd\x2d44b8\x2da1b2\x2d4fca01315eff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rxmh.mount: Deactivated successfully. Sep 12 10:19:37.332832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532-rootfs.mount: Deactivated successfully. Sep 12 10:19:37.332949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbdefe0ee4b38e2dc5651b7d239af1e254252c610a6d018d6000fc9812346532-shm.mount: Deactivated successfully. Sep 12 10:19:37.333076 systemd[1]: var-lib-kubelet-pods-93bc4f6b\x2dca4e\x2d49df\x2dad29\x2d3d7d2f89e884-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdlt9.mount: Deactivated successfully. Sep 12 10:19:37.333199 systemd[1]: var-lib-kubelet-pods-93bc4f6b\x2dca4e\x2d49df\x2dad29\x2d3d7d2f89e884-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 10:19:37.333315 systemd[1]: var-lib-kubelet-pods-93bc4f6b\x2dca4e\x2d49df\x2dad29\x2d3d7d2f89e884-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 10:19:37.667494 kubelet[2674]: I0912 10:19:37.667016 2674 scope.go:117] "RemoveContainer" containerID="ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3" Sep 12 10:19:37.677703 containerd[1513]: time="2025-09-12T10:19:37.677034459Z" level=info msg="RemoveContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\"" Sep 12 10:19:37.678428 systemd[1]: Removed slice kubepods-burstable-pod93bc4f6b_ca4e_49df_ad29_3d7d2f89e884.slice - libcontainer container kubepods-burstable-pod93bc4f6b_ca4e_49df_ad29_3d7d2f89e884.slice. Sep 12 10:19:37.678627 systemd[1]: kubepods-burstable-pod93bc4f6b_ca4e_49df_ad29_3d7d2f89e884.slice: Consumed 10.727s CPU time, 125M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 10:19:37.683597 systemd[1]: Removed slice kubepods-besteffort-podd2af3b83_2edd_44b8_a1b2_4fca01315eff.slice - libcontainer container kubepods-besteffort-podd2af3b83_2edd_44b8_a1b2_4fca01315eff.slice. Sep 12 10:19:37.690216 containerd[1513]: time="2025-09-12T10:19:37.690152966Z" level=info msg="RemoveContainer for \"ef67070a86f8a4ceb0485f5d0845a283e54d71251bb2f36d5924d7c69d0930a3\" returns successfully" Sep 12 10:19:37.691510 kubelet[2674]: I0912 10:19:37.691478 2674 scope.go:117] "RemoveContainer" containerID="76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9" Sep 12 10:19:37.693727 containerd[1513]: time="2025-09-12T10:19:37.693064186Z" level=info msg="RemoveContainer for \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\"" Sep 12 10:19:37.698866 containerd[1513]: time="2025-09-12T10:19:37.698792433Z" level=info msg="RemoveContainer for \"76f0fa588e95f03cfe29956e6a9411d4713eaac5e7ee1ca5e0df6fb721b37de9\" returns successfully" Sep 12 10:19:37.699675 kubelet[2674]: I0912 10:19:37.699056 2674 scope.go:117] "RemoveContainer" containerID="09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1" Sep 12 10:19:37.701107 containerd[1513]: time="2025-09-12T10:19:37.701064507Z" level=info msg="RemoveContainer for \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\"" Sep 12 10:19:37.711803 containerd[1513]: time="2025-09-12T10:19:37.711728258Z" level=info msg="RemoveContainer for \"09bc5865723266c6059458cb1e3dc3a5289ed798c5ccf6e24b8e2ab7b28d7ce1\" returns successfully" Sep 12 10:19:37.716836 kubelet[2674]: I0912 10:19:37.716584 2674 scope.go:117] "RemoveContainer" containerID="148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5" Sep 12 10:19:37.728090 containerd[1513]: time="2025-09-12T10:19:37.728023754Z" level=info msg="RemoveContainer for \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\"" Sep 12 10:19:37.736840 containerd[1513]: time="2025-09-12T10:19:37.736761590Z" level=info msg="RemoveContainer for \"148afa9bf4e56d604de58bd37c1b7afeec32520f473b844541d70f0be4f2b2d5\" returns successfully" Sep 12 10:19:37.737131 kubelet[2674]: I0912 10:19:37.737096 2674 scope.go:117] "RemoveContainer" containerID="dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66" Sep 12 10:19:37.738634 containerd[1513]: time="2025-09-12T10:19:37.738586126Z" level=info msg="RemoveContainer for \"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\"" Sep 12 10:19:37.743132 containerd[1513]: time="2025-09-12T10:19:37.743077169Z" level=info msg="RemoveContainer for 
\"dc201b46ee68a108acd323a623b341bd39384fbe9c9430cca4303daa2cca1c66\" returns successfully" Sep 12 10:19:38.113981 kubelet[2674]: I0912 10:19:38.113926 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" path="/var/lib/kubelet/pods/93bc4f6b-ca4e-49df-ad29-3d7d2f89e884/volumes" Sep 12 10:19:38.115282 kubelet[2674]: I0912 10:19:38.115241 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2af3b83-2edd-44b8-a1b2-4fca01315eff" path="/var/lib/kubelet/pods/d2af3b83-2edd-44b8-a1b2-4fca01315eff/volumes" Sep 12 10:19:38.299635 sshd[4307]: Connection closed by 139.178.89.65 port 53058 Sep 12 10:19:38.302094 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:38.310235 systemd[1]: sshd@24-10.128.0.19:22-139.178.89.65:53058.service: Deactivated successfully. Sep 12 10:19:38.317759 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 10:19:38.320386 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. Sep 12 10:19:38.322271 systemd-logind[1490]: Removed session 24. Sep 12 10:19:38.378264 systemd[1]: Started sshd@25-10.128.0.19:22-139.178.89.65:53070.service - OpenSSH per-connection server daemon (139.178.89.65:53070). Sep 12 10:19:38.776696 sshd[4473]: Accepted publickey for core from 139.178.89.65 port 53070 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:38.778205 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:38.786324 systemd-logind[1490]: New session 25 of user core. Sep 12 10:19:38.795975 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 10:19:38.922735 ntpd[1476]: Deleting interface #11 lxc_health, fe80::380d:96ff:fe8a:4f5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=99 secs Sep 12 10:19:38.923251 ntpd[1476]: 12 Sep 10:19:38 ntpd[1476]: Deleting interface #11 lxc_health, fe80::380d:96ff:fe8a:4f5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=99 secs Sep 12 10:19:39.810845 kubelet[2674]: I0912 10:19:39.810715 2674 memory_manager.go:355] "RemoveStaleState removing state" podUID="93bc4f6b-ca4e-49df-ad29-3d7d2f89e884" containerName="cilium-agent" Sep 12 10:19:39.810845 kubelet[2674]: I0912 10:19:39.810798 2674 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2af3b83-2edd-44b8-a1b2-4fca01315eff" containerName="cilium-operator" Sep 12 10:19:39.836590 sshd[4475]: Connection closed by 139.178.89.65 port 53070 Sep 12 10:19:39.839573 systemd[1]: Created slice kubepods-burstable-pod382ba588_623f_41d9_8275_f59b30748c6e.slice - libcontainer container kubepods-burstable-pod382ba588_623f_41d9_8275_f59b30748c6e.slice. 
Sep 12 10:19:39.843567 sshd-session[4473]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:39.847939 kubelet[2674]: I0912 10:19:39.846541 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-cni-path\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.847939 kubelet[2674]: I0912 10:19:39.846589 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/382ba588-623f-41d9-8275-f59b30748c6e-cilium-ipsec-secrets\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.847939 kubelet[2674]: I0912 10:19:39.846630 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-host-proc-sys-net\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.848881 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-lib-modules\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.848939 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-xtables-lock\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.848974 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-hostproc\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.849003 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-cilium-cgroup\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.849035 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/382ba588-623f-41d9-8275-f59b30748c6e-clustermesh-secrets\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.849877 kubelet[2674]: I0912 10:19:39.849065 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-host-proc-sys-kernel\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.850274 kubelet[2674]: I0912 10:19:39.849094 2674 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/382ba588-623f-41d9-8275-f59b30748c6e-hubble-tls\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.850274 kubelet[2674]: I0912 10:19:39.849123 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdtbc\" (UniqueName: \"kubernetes.io/projected/382ba588-623f-41d9-8275-f59b30748c6e-kube-api-access-mdtbc\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.852361 kubelet[2674]: I0912 10:19:39.849158 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-cilium-run\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.852361 kubelet[2674]: I0912 10:19:39.852204 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-etc-cni-netd\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.852361 kubelet[2674]: I0912 10:19:39.852247 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/382ba588-623f-41d9-8275-f59b30748c6e-cilium-config-path\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.852361 kubelet[2674]: I0912 10:19:39.852279 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/382ba588-623f-41d9-8275-f59b30748c6e-bpf-maps\") pod \"cilium-s7smx\" (UID: \"382ba588-623f-41d9-8275-f59b30748c6e\") " pod="kube-system/cilium-s7smx" Sep 12 10:19:39.861581 systemd[1]: sshd@25-10.128.0.19:22-139.178.89.65:53070.service: Deactivated successfully. Sep 12 10:19:39.862341 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. Sep 12 10:19:39.868487 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 10:19:39.873081 systemd-logind[1490]: Removed session 25. Sep 12 10:19:39.912424 systemd[1]: Started sshd@26-10.128.0.19:22-139.178.89.65:53080.service - OpenSSH per-connection server daemon (139.178.89.65:53080). Sep 12 10:19:40.161845 containerd[1513]: time="2025-09-12T10:19:40.161673212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7smx,Uid:382ba588-623f-41d9-8275-f59b30748c6e,Namespace:kube-system,Attempt:0,}" Sep 12 10:19:40.214485 containerd[1513]: time="2025-09-12T10:19:40.214329225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:19:40.214485 containerd[1513]: time="2025-09-12T10:19:40.214406282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:19:40.214485 containerd[1513]: time="2025-09-12T10:19:40.214425910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:40.214951 containerd[1513]: time="2025-09-12T10:19:40.214562188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:40.255228 systemd[1]: Started cri-containerd-28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979.scope - libcontainer container 28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979. Sep 12 10:19:40.300298 containerd[1513]: time="2025-09-12T10:19:40.300084078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7smx,Uid:382ba588-623f-41d9-8275-f59b30748c6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\"" Sep 12 10:19:40.306957 containerd[1513]: time="2025-09-12T10:19:40.306896011Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:19:40.321436 sshd[4489]: Accepted publickey for core from 139.178.89.65 port 53080 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:40.324170 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:40.330357 containerd[1513]: time="2025-09-12T10:19:40.329492651Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951\"" Sep 12 10:19:40.330910 containerd[1513]: time="2025-09-12T10:19:40.330846783Z" level=info msg="StartContainer for \"da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951\"" Sep 12 10:19:40.346126 systemd-logind[1490]: New session 26 of user core. Sep 12 10:19:40.351940 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 10:19:40.390911 systemd[1]: Started cri-containerd-da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951.scope - libcontainer container da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951. Sep 12 10:19:40.439636 containerd[1513]: time="2025-09-12T10:19:40.439451300Z" level=info msg="StartContainer for \"da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951\" returns successfully" Sep 12 10:19:40.457260 systemd[1]: cri-containerd-da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951.scope: Deactivated successfully. Sep 12 10:19:40.502567 containerd[1513]: time="2025-09-12T10:19:40.502436977Z" level=info msg="shim disconnected" id=da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951 namespace=k8s.io Sep 12 10:19:40.502567 containerd[1513]: time="2025-09-12T10:19:40.502560183Z" level=warning msg="cleaning up after shim disconnected" id=da73b0baca3e3b3d49da533fe0d48ed7b791cf4f662348f21952f64c638eb951 namespace=k8s.io Sep 12 10:19:40.502567 containerd[1513]: time="2025-09-12T10:19:40.502575229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:40.580406 sshd[4545]: Connection closed by 139.178.89.65 port 53080 Sep 12 10:19:40.582056 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:40.589287 systemd[1]: sshd@26-10.128.0.19:22-139.178.89.65:53080.service: Deactivated successfully. Sep 12 10:19:40.593836 systemd[1]: session-26.scope: Deactivated successfully. 
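Editor's note: the RunPodSandbox / CreateContainer / StartContainer messages above are containerd's CRI service answering the kubelet. Below is a minimal Go sketch of the same three RPCs against containerd's CRI socket, assuming k8s.io/cri-api and a local /run/containerd/containerd.sock; the image reference is hypothetical (the journal never names it), and the real kubelet populates far more of the sandbox and container configuration than shown here.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Talk to containerd's CRI endpoint over its local unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox metadata taken from the "RunPodSandbox for &PodSandboxMetadata{...}" entries above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-s7smx",
			Uid:       "382ba588-623f-41d9-8275-f59b30748c6e",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer/StartContainer mirror the mount-cgroup init container entries.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // hypothetical image ref, not in the log
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, created.ContainerId)
}
```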
Sep 12 10:19:40.595110 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. Sep 12 10:19:40.596976 systemd-logind[1490]: Removed session 26. Sep 12 10:19:40.661155 systemd[1]: Started sshd@27-10.128.0.19:22-139.178.89.65:50436.service - OpenSSH per-connection server daemon (139.178.89.65:50436). Sep 12 10:19:40.695200 containerd[1513]: time="2025-09-12T10:19:40.693884415Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:19:40.712513 containerd[1513]: time="2025-09-12T10:19:40.711869572Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14\"" Sep 12 10:19:40.713774 containerd[1513]: time="2025-09-12T10:19:40.712904704Z" level=info msg="StartContainer for \"5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14\"" Sep 12 10:19:40.774013 systemd[1]: Started cri-containerd-5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14.scope - libcontainer container 5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14. Sep 12 10:19:40.826138 containerd[1513]: time="2025-09-12T10:19:40.826055403Z" level=info msg="StartContainer for \"5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14\" returns successfully" Sep 12 10:19:40.836502 systemd[1]: cri-containerd-5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14.scope: Deactivated successfully. Sep 12 10:19:40.885072 containerd[1513]: time="2025-09-12T10:19:40.884906127Z" level=info msg="shim disconnected" id=5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14 namespace=k8s.io Sep 12 10:19:40.885072 containerd[1513]: time="2025-09-12T10:19:40.885067529Z" level=warning msg="cleaning up after shim disconnected" id=5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14 namespace=k8s.io Sep 12 10:19:40.885072 containerd[1513]: time="2025-09-12T10:19:40.885090548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:41.063563 sshd[4608]: Accepted publickey for core from 139.178.89.65 port 50436 ssh2: RSA SHA256:anthkU0aLZsV4K+HRRESC6qqQ4s1PzrdVmL0QQYZOHo Sep 12 10:19:41.066198 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:41.075063 systemd-logind[1490]: New session 27 of user core. Sep 12 10:19:41.086004 systemd[1]: Started session-27.scope - Session 27 of User core. 
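Editor's note: each init container above (mount-cgroup, then apply-sysctl-overwrites) runs briefly and exits, at which point systemd deactivates its cri-containerd-<id>.scope and containerd cleans up the shim. A hedged sketch of how a client could observe that exit through the same CRI API, polling ContainerStatus until the state becomes CONTAINER_EXITED; the container ID is copied from the apply-sysctl-overwrites entries, everything else is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// waitForExit polls ContainerStatus until the container leaves the running
// state, roughly the moment containerd logs "shim disconnected" above.
func waitForExit(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (int32, error) {
	for {
		resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
		if err != nil {
			return 0, err
		}
		if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
			return resp.Status.ExitCode, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// ID of the apply-sysctl-overwrites container from the journal above.
	code, err := waitForExit(ctx, runtimeapi.NewRuntimeServiceClient(conn),
		"5d7ce874eb45273917b65bf274ecca4bca458f1eb48d7cb606a6d31dc66fbe14")
	if err != nil {
		panic(err)
	}
	fmt.Println("init container exited with code", code)
}
```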
Sep 12 10:19:41.692374 containerd[1513]: time="2025-09-12T10:19:41.692319788Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:19:41.723236 containerd[1513]: time="2025-09-12T10:19:41.723126692Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951\"" Sep 12 10:19:41.725364 containerd[1513]: time="2025-09-12T10:19:41.724276642Z" level=info msg="StartContainer for \"70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951\"" Sep 12 10:19:41.805957 systemd[1]: Started cri-containerd-70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951.scope - libcontainer container 70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951. Sep 12 10:19:41.855705 containerd[1513]: time="2025-09-12T10:19:41.855423764Z" level=info msg="StartContainer for \"70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951\" returns successfully" Sep 12 10:19:41.862930 systemd[1]: cri-containerd-70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951.scope: Deactivated successfully. Sep 12 10:19:41.901511 containerd[1513]: time="2025-09-12T10:19:41.901119843Z" level=info msg="shim disconnected" id=70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951 namespace=k8s.io Sep 12 10:19:41.901511 containerd[1513]: time="2025-09-12T10:19:41.901229863Z" level=warning msg="cleaning up after shim disconnected" id=70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951 namespace=k8s.io Sep 12 10:19:41.901511 containerd[1513]: time="2025-09-12T10:19:41.901247423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:41.969561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70c265a4acfb67fe67f2c970600173759a4089e02c794c57317b74b0dbea7951-rootfs.mount: Deactivated successfully. Sep 12 10:19:42.299089 kubelet[2674]: E0912 10:19:42.299029 2674 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:19:42.699614 containerd[1513]: time="2025-09-12T10:19:42.698251317Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:19:42.736397 containerd[1513]: time="2025-09-12T10:19:42.736266014Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a\"" Sep 12 10:19:42.745093 containerd[1513]: time="2025-09-12T10:19:42.745024825Z" level=info msg="StartContainer for \"a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a\"" Sep 12 10:19:42.822980 systemd[1]: Started cri-containerd-a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a.scope - libcontainer container a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a. Sep 12 10:19:42.866607 systemd[1]: cri-containerd-a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a.scope: Deactivated successfully. 
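Editor's note: the kubelet error "Container runtime network not ready ... cni plugin not initialized" reflects the NetworkReady condition that containerd reports through the CRI Status RPC; it clears once the cilium-agent container installs a CNI configuration. A small sketch that queries those conditions directly, again assuming the local containerd socket and k8s.io/cri-api.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The kubelet derives its "network not ready" message from these
	// runtime conditions (typically RuntimeReady and NetworkReady).
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		panic(err)
	}
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%q message=%q\n", cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}
```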
Sep 12 10:19:42.874555 containerd[1513]: time="2025-09-12T10:19:42.874492395Z" level=info msg="StartContainer for \"a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a\" returns successfully" Sep 12 10:19:42.912724 containerd[1513]: time="2025-09-12T10:19:42.912591209Z" level=info msg="shim disconnected" id=a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a namespace=k8s.io Sep 12 10:19:42.912724 containerd[1513]: time="2025-09-12T10:19:42.912715029Z" level=warning msg="cleaning up after shim disconnected" id=a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a namespace=k8s.io Sep 12 10:19:42.912724 containerd[1513]: time="2025-09-12T10:19:42.912729832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:42.969581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7130f6102558951268c4ebf92203351758bf47da8b7392a4ba256fdb0beb49a-rootfs.mount: Deactivated successfully. Sep 12 10:19:43.705068 containerd[1513]: time="2025-09-12T10:19:43.704507082Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:19:43.738279 containerd[1513]: time="2025-09-12T10:19:43.737249700Z" level=info msg="CreateContainer within sandbox \"28842fed7b8dd2dbcb15f202384afb4357de25e9dde36728a3557491795b0979\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba\"" Sep 12 10:19:43.739724 containerd[1513]: time="2025-09-12T10:19:43.738696678Z" level=info msg="StartContainer for \"57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba\"" Sep 12 10:19:43.810975 systemd[1]: Started cri-containerd-57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba.scope - libcontainer container 57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba. 
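Editor's note: the cri-containerd-<id>.scope units appearing throughout these entries sit inside the pod slice created earlier. Extending the slice-naming sketch above, here is a rough reconstruction of the full cgroup path, assuming the systemd cgroup driver and a unified cgroup-v2 hierarchy mounted at /sys/fs/cgroup; only the slice and scope names come from the journal, the surrounding layout is an assumption.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// scopeCgroupPath sketches where a cri-containerd-<id>.scope unit lands in the
// cgroup tree under the systemd driver: kubepods.slice -> QoS slice ->
// per-pod slice -> container scope. The layout is assumed, not logged.
func scopeCgroupPath(qosClass, podUID, containerID string) string {
	podSlice := "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_")
	return filepath.Join("/sys/fs/cgroup",
		"kubepods.slice",
		"kubepods-"+qosClass+".slice",
		podSlice+".slice",
		"cri-containerd-"+containerID+".scope")
}

func main() {
	// Pod UID and cilium-agent container ID taken from the journal above.
	fmt.Println(scopeCgroupPath("burstable",
		"382ba588-623f-41d9-8275-f59b30748c6e",
		"57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba"))
}
```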
Sep 12 10:19:43.860526 containerd[1513]: time="2025-09-12T10:19:43.860324969Z" level=info msg="StartContainer for \"57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba\" returns successfully" Sep 12 10:19:44.338525 kubelet[2674]: I0912 10:19:44.338430 2674 setters.go:602] "Node became not ready" node="ci-4230-2-2-nightly-20250911-2100-377226d477597500f469" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:19:44Z","lastTransitionTime":"2025-09-12T10:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 10:19:44.454725 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 10:19:44.730826 kubelet[2674]: I0912 10:19:44.729423 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s7smx" podStartSLOduration=5.729391219 podStartE2EDuration="5.729391219s" podCreationTimestamp="2025-09-12 10:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:19:44.729129494 +0000 UTC m=+132.869090423" watchObservedRunningTime="2025-09-12 10:19:44.729391219 +0000 UTC m=+132.869352137" Sep 12 10:19:47.956777 systemd-networkd[1381]: lxc_health: Link UP Sep 12 10:19:47.967145 systemd-networkd[1381]: lxc_health: Gained carrier Sep 12 10:19:49.375021 systemd-networkd[1381]: lxc_health: Gained IPv6LL Sep 12 10:19:50.070569 systemd[1]: run-containerd-runc-k8s.io-57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba-runc.3Icqvp.mount: Deactivated successfully. Sep 12 10:19:51.922711 ntpd[1476]: Listen normally on 14 lxc_health [fe80::309a:c9ff:fe85:2bac%14]:123 Sep 12 10:19:51.923536 ntpd[1476]: 12 Sep 10:19:51 ntpd[1476]: Listen normally on 14 lxc_health [fe80::309a:c9ff:fe85:2bac%14]:123 Sep 12 10:19:52.388933 systemd[1]: run-containerd-runc-k8s.io-57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba-runc.W8uPve.mount: Deactivated successfully. Sep 12 10:19:52.499210 kubelet[2674]: E0912 10:19:52.499032 2674 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34056->127.0.0.1:32915: write tcp 127.0.0.1:34056->127.0.0.1:32915: write: broken pipe Sep 12 10:19:54.594635 systemd[1]: run-containerd-runc-k8s.io-57887a0e9962e1937c6f4163a33350ed94e79cb085041d71dfc40b85c6192cba-runc.HcyMuy.mount: Deactivated successfully. Sep 12 10:19:54.734025 sshd[4671]: Connection closed by 139.178.89.65 port 50436 Sep 12 10:19:54.735556 sshd-session[4608]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:54.741688 systemd[1]: sshd@27-10.128.0.19:22-139.178.89.65:50436.service: Deactivated successfully. Sep 12 10:19:54.747181 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:19:54.751043 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:19:54.757735 systemd-logind[1490]: Removed session 27.
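Editor's note: the "Node became not ready" entry embeds the node's Ready condition as JSON, and it should transition back to Ready once cilium-agent brings up the CNI (the lxc_health link and the pod startup record that follow are consistent with that). A short sketch that parses that exact condition object; the struct is illustrative, not the upstream corev1.NodeCondition type.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the fields of the condition object the kubelet logged.
type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Condition copied verbatim from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:19:44Z","lastTransitionTime":"2025-09-12T10:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
```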