Sep 16 05:03:33.620251 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 05:03:33.620302 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:03:33.620321 kernel: BIOS-provided physical RAM map: Sep 16 05:03:33.620334 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Sep 16 05:03:33.620347 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Sep 16 05:03:33.620361 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Sep 16 05:03:33.620381 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Sep 16 05:03:33.620396 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Sep 16 05:03:33.620410 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd329fff] usable Sep 16 05:03:33.620424 kernel: BIOS-e820: [mem 0x00000000bd32a000-0x00000000bd331fff] ACPI data Sep 16 05:03:33.620439 kernel: BIOS-e820: [mem 0x00000000bd332000-0x00000000bf8ecfff] usable Sep 16 05:03:33.620454 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Sep 16 05:03:33.620468 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Sep 16 05:03:33.620483 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Sep 16 05:03:33.620505 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Sep 16 05:03:33.620522 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Sep 16 05:03:33.620538 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Sep 16 05:03:33.620586 kernel: NX (Execute Disable) protection: active Sep 16 05:03:33.620608 kernel: APIC: Static calls initialized Sep 16 05:03:33.620623 kernel: efi: EFI v2.7 by EDK II Sep 16 05:03:33.620638 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32a018 Sep 16 05:03:33.620652 kernel: random: crng init done Sep 16 05:03:33.620673 kernel: secureboot: Secure boot disabled Sep 16 05:03:33.620689 kernel: SMBIOS 2.4 present. 
Sep 16 05:03:33.620706 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025 Sep 16 05:03:33.620721 kernel: DMI: Memory slots populated: 1/1 Sep 16 05:03:33.620737 kernel: Hypervisor detected: KVM Sep 16 05:03:33.620754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 05:03:33.620770 kernel: kvm-clock: using sched offset of 14920104557 cycles Sep 16 05:03:33.620786 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 05:03:33.620801 kernel: tsc: Detected 2299.998 MHz processor Sep 16 05:03:33.620817 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 05:03:33.620838 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 05:03:33.620853 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Sep 16 05:03:33.620870 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Sep 16 05:03:33.620887 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 05:03:33.620905 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Sep 16 05:03:33.620920 kernel: Using GB pages for direct mapping Sep 16 05:03:33.620934 kernel: ACPI: Early table checksum verification disabled Sep 16 05:03:33.620949 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Sep 16 05:03:33.620976 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Sep 16 05:03:33.620993 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Sep 16 05:03:33.621010 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Sep 16 05:03:33.621027 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Sep 16 05:03:33.621045 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Sep 16 05:03:33.621063 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Sep 16 05:03:33.621085 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Sep 16 05:03:33.621103 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Sep 16 05:03:33.621120 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Sep 16 05:03:33.621138 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Sep 16 05:03:33.621156 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Sep 16 05:03:33.621173 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Sep 16 05:03:33.621190 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Sep 16 05:03:33.621206 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Sep 16 05:03:33.621224 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Sep 16 05:03:33.621245 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Sep 16 05:03:33.621262 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Sep 16 05:03:33.621280 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Sep 16 05:03:33.621298 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Sep 16 05:03:33.621315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 16 05:03:33.621331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Sep 16 05:03:33.621348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Sep 16 05:03:33.621363 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Sep 16 05:03:33.621380 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Sep 16 05:03:33.621400 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Sep 16 05:03:33.621416 kernel: Zone ranges: Sep 16 05:03:33.621432 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 05:03:33.621449 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 16 05:03:33.621466 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Sep 16 05:03:33.621483 kernel: Device empty Sep 16 05:03:33.621501 kernel: Movable zone start for each node Sep 16 05:03:33.621518 kernel: Early memory node ranges Sep 16 05:03:33.621536 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Sep 16 05:03:33.623606 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Sep 16 05:03:33.623632 kernel: node 0: [mem 0x0000000000100000-0x00000000bd329fff] Sep 16 05:03:33.623650 kernel: node 0: [mem 0x00000000bd332000-0x00000000bf8ecfff] Sep 16 05:03:33.623668 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Sep 16 05:03:33.623685 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Sep 16 05:03:33.623703 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Sep 16 05:03:33.623721 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 05:03:33.623739 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Sep 16 05:03:33.623756 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Sep 16 05:03:33.623774 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Sep 16 05:03:33.623798 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 16 05:03:33.623816 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Sep 16 05:03:33.623833 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 16 05:03:33.623851 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 16 05:03:33.623868 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 16 05:03:33.623886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 05:03:33.623903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 05:03:33.623921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 05:03:33.623939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 05:03:33.623960 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 05:03:33.623978 kernel: CPU topo: Max. logical packages: 1 Sep 16 05:03:33.623995 kernel: CPU topo: Max. logical dies: 1 Sep 16 05:03:33.624013 kernel: CPU topo: Max. dies per package: 1 Sep 16 05:03:33.624030 kernel: CPU topo: Max. threads per core: 2 Sep 16 05:03:33.624048 kernel: CPU topo: Num. cores per package: 1 Sep 16 05:03:33.624065 kernel: CPU topo: Num. 
threads per package: 2 Sep 16 05:03:33.624083 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 16 05:03:33.624100 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 16 05:03:33.624122 kernel: Booting paravirtualized kernel on KVM Sep 16 05:03:33.624139 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 05:03:33.624157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 16 05:03:33.624175 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 16 05:03:33.624192 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 16 05:03:33.624209 kernel: pcpu-alloc: [0] 0 1 Sep 16 05:03:33.624226 kernel: kvm-guest: PV spinlocks enabled Sep 16 05:03:33.624244 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 16 05:03:33.624263 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:03:33.624285 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 05:03:33.624303 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 16 05:03:33.624321 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 05:03:33.624336 kernel: Fallback order for Node 0: 0 Sep 16 05:03:33.624353 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Sep 16 05:03:33.624370 kernel: Policy zone: Normal Sep 16 05:03:33.624388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 05:03:33.624406 kernel: software IO TLB: area num 2. Sep 16 05:03:33.624440 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 05:03:33.624459 kernel: Kernel/User page tables isolation: enabled Sep 16 05:03:33.624477 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 05:03:33.624499 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 05:03:33.624518 kernel: Dynamic Preempt: voluntary Sep 16 05:03:33.624536 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 05:03:33.632478 kernel: rcu: RCU event tracing is enabled. Sep 16 05:03:33.632513 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 05:03:33.632533 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 05:03:33.632586 kernel: Rude variant of Tasks RCU enabled. Sep 16 05:03:33.632603 kernel: Tracing variant of Tasks RCU enabled. Sep 16 05:03:33.632620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 05:03:33.632636 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 05:03:33.632668 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 05:03:33.632686 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 05:03:33.632703 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 05:03:33.632720 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 16 05:03:33.632744 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 16 05:03:33.632761 kernel: Console: colour dummy device 80x25 Sep 16 05:03:33.632778 kernel: printk: legacy console [ttyS0] enabled Sep 16 05:03:33.632796 kernel: ACPI: Core revision 20240827 Sep 16 05:03:33.632813 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 05:03:33.632831 kernel: x2apic enabled Sep 16 05:03:33.632849 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 05:03:33.632867 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Sep 16 05:03:33.632885 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 16 05:03:33.632906 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Sep 16 05:03:33.632924 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Sep 16 05:03:33.632942 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Sep 16 05:03:33.632961 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 05:03:33.632979 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 16 05:03:33.632997 kernel: Spectre V2 : Mitigation: IBRS Sep 16 05:03:33.633016 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 05:03:33.633034 kernel: RETBleed: Mitigation: IBRS Sep 16 05:03:33.633051 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 16 05:03:33.633073 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Sep 16 05:03:33.633091 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 16 05:03:33.633109 kernel: MDS: Mitigation: Clear CPU buffers Sep 16 05:03:33.633128 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 05:03:33.633146 kernel: active return thunk: its_return_thunk Sep 16 05:03:33.633164 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 16 05:03:33.633183 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 05:03:33.633201 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 05:03:33.633223 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 05:03:33.633241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 05:03:33.633260 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 16 05:03:33.633278 kernel: Freeing SMP alternatives memory: 32K Sep 16 05:03:33.633297 kernel: pid_max: default: 32768 minimum: 301 Sep 16 05:03:33.633315 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 05:03:33.633332 kernel: landlock: Up and running. Sep 16 05:03:33.633350 kernel: SELinux: Initializing. Sep 16 05:03:33.633369 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 05:03:33.633389 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 05:03:33.633407 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Sep 16 05:03:33.633424 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Sep 16 05:03:33.633441 kernel: signal: max sigframe size: 1776 Sep 16 05:03:33.633459 kernel: rcu: Hierarchical SRCU implementation. Sep 16 05:03:33.633478 kernel: rcu: Max phase no-delay instances is 400. 
Sep 16 05:03:33.633495 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 05:03:33.633513 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 16 05:03:33.633529 kernel: smp: Bringing up secondary CPUs ... Sep 16 05:03:33.633552 kernel: smpboot: x86: Booting SMP configuration: Sep 16 05:03:33.633591 kernel: .... node #0, CPUs: #1 Sep 16 05:03:33.633611 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 16 05:03:33.633630 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 16 05:03:33.633648 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 05:03:33.633666 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 16 05:03:33.633684 kernel: Memory: 7564024K/7860552K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 290704K reserved, 0K cma-reserved) Sep 16 05:03:33.633701 kernel: devtmpfs: initialized Sep 16 05:03:33.633718 kernel: x86/mm: Memory block size: 128MB Sep 16 05:03:33.633740 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Sep 16 05:03:33.633757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 05:03:33.633775 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 05:03:33.633792 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 05:03:33.633808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 05:03:33.633826 kernel: audit: initializing netlink subsys (disabled) Sep 16 05:03:33.633844 kernel: audit: type=2000 audit(1757999009.085:1): state=initialized audit_enabled=0 res=1 Sep 16 05:03:33.633862 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 05:03:33.633885 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 05:03:33.633903 kernel: cpuidle: using governor menu Sep 16 05:03:33.633921 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 05:03:33.633938 kernel: dca service started, version 1.12.1 Sep 16 05:03:33.633956 kernel: PCI: Using configuration type 1 for base access Sep 16 05:03:33.633974 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 16 05:03:33.633992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 05:03:33.634010 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 05:03:33.634027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 05:03:33.634050 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 05:03:33.634067 kernel: ACPI: Added _OSI(Module Device) Sep 16 05:03:33.634086 kernel: ACPI: Added _OSI(Processor Device) Sep 16 05:03:33.634104 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 05:03:33.634120 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 16 05:03:33.634138 kernel: ACPI: Interpreter enabled Sep 16 05:03:33.634154 kernel: ACPI: PM: (supports S0 S3 S5) Sep 16 05:03:33.634169 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 05:03:33.634186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 05:03:33.634209 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 16 05:03:33.634226 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Sep 16 05:03:33.634243 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 05:03:33.634514 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 16 05:03:33.634747 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 16 05:03:33.634933 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 16 05:03:33.634956 kernel: PCI host bridge to bus 0000:00 Sep 16 05:03:33.635132 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 05:03:33.635306 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 05:03:33.635472 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 05:03:33.635682 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Sep 16 05:03:33.635848 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 05:03:33.636051 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 16 05:03:33.636251 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 16 05:03:33.636451 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 16 05:03:33.636672 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 16 05:03:33.636875 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Sep 16 05:03:33.637060 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Sep 16 05:03:33.637244 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Sep 16 05:03:33.637435 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 16 05:03:33.637651 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Sep 16 05:03:33.637839 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Sep 16 05:03:33.638033 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 16 05:03:33.638220 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Sep 16 05:03:33.638406 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Sep 16 05:03:33.638430 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 05:03:33.638450 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 05:03:33.638474 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Sep 16 05:03:33.638493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 16 05:03:33.638512 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 16 05:03:33.638531 kernel: iommu: Default domain type: Translated Sep 16 05:03:33.638551 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 05:03:33.638599 kernel: efivars: Registered efivars operations Sep 16 05:03:33.638616 kernel: PCI: Using ACPI for IRQ routing Sep 16 05:03:33.638632 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 05:03:33.638649 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Sep 16 05:03:33.638671 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Sep 16 05:03:33.638687 kernel: e820: reserve RAM buffer [mem 0xbd32a000-0xbfffffff] Sep 16 05:03:33.638705 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Sep 16 05:03:33.638722 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Sep 16 05:03:33.638740 kernel: vgaarb: loaded Sep 16 05:03:33.638758 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 05:03:33.638775 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 05:03:33.638793 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 05:03:33.638811 kernel: pnp: PnP ACPI init Sep 16 05:03:33.638833 kernel: pnp: PnP ACPI: found 7 devices Sep 16 05:03:33.638850 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 05:03:33.638868 kernel: NET: Registered PF_INET protocol family Sep 16 05:03:33.638885 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 16 05:03:33.638903 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 16 05:03:33.638920 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 05:03:33.638938 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 05:03:33.638959 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 16 05:03:33.638980 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 16 05:03:33.639005 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 16 05:03:33.639023 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 16 05:03:33.639040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 05:03:33.639058 kernel: NET: Registered PF_XDP protocol family Sep 16 05:03:33.639246 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 05:03:33.639422 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 05:03:33.639642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 05:03:33.639811 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Sep 16 05:03:33.640008 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 16 05:03:33.640034 kernel: PCI: CLS 0 bytes, default 64 Sep 16 05:03:33.640053 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 16 05:03:33.640072 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Sep 16 05:03:33.640092 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 16 05:03:33.640111 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Sep 16 05:03:33.640130 kernel: clocksource: Switched to clocksource tsc Sep 16 05:03:33.640149 
kernel: Initialise system trusted keyrings Sep 16 05:03:33.640172 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 16 05:03:33.640191 kernel: Key type asymmetric registered Sep 16 05:03:33.640210 kernel: Asymmetric key parser 'x509' registered Sep 16 05:03:33.640228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 05:03:33.640247 kernel: io scheduler mq-deadline registered Sep 16 05:03:33.640266 kernel: io scheduler kyber registered Sep 16 05:03:33.640284 kernel: io scheduler bfq registered Sep 16 05:03:33.640303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 05:03:33.640323 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 16 05:03:33.640514 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Sep 16 05:03:33.640539 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 16 05:03:33.651357 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Sep 16 05:03:33.651397 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 16 05:03:33.651612 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Sep 16 05:03:33.651636 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 05:03:33.651655 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 05:03:33.651673 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 16 05:03:33.651692 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Sep 16 05:03:33.651717 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Sep 16 05:03:33.651910 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Sep 16 05:03:33.651935 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 05:03:33.651953 kernel: i8042: Warning: Keylock active Sep 16 05:03:33.651972 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 05:03:33.651988 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 05:03:33.652194 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 16 05:03:33.652380 kernel: rtc_cmos 00:00: registered as rtc0 Sep 16 05:03:33.652724 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T05:03:32 UTC (1757999012) Sep 16 05:03:33.652904 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 16 05:03:33.652929 kernel: intel_pstate: CPU model not supported Sep 16 05:03:33.652949 kernel: pstore: Using crash dump compression: deflate Sep 16 05:03:33.652974 kernel: pstore: Registered efi_pstore as persistent store backend Sep 16 05:03:33.652994 kernel: NET: Registered PF_INET6 protocol family Sep 16 05:03:33.653010 kernel: Segment Routing with IPv6 Sep 16 05:03:33.653050 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 05:03:33.653104 kernel: NET: Registered PF_PACKET protocol family Sep 16 05:03:33.653138 kernel: Key type dns_resolver registered Sep 16 05:03:33.653157 kernel: IPI shorthand broadcast: enabled Sep 16 05:03:33.653176 kernel: sched_clock: Marking stable (3443004558, 139946202)->(3611457374, -28506614) Sep 16 05:03:33.653195 kernel: registered taskstats version 1 Sep 16 05:03:33.653214 kernel: Loading compiled-in X.509 certificates Sep 16 05:03:33.653233 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 05:03:33.653251 kernel: Demotion targets for Node 0: null Sep 16 05:03:33.653270 kernel: Key type .fscrypt registered Sep 16 05:03:33.653292 kernel: Key type fscrypt-provisioning registered Sep 16 
05:03:33.653311 kernel: ima: Allocated hash algorithm: sha1 Sep 16 05:03:33.653330 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 05:03:33.653347 kernel: ima: No architecture policies found Sep 16 05:03:33.653366 kernel: clk: Disabling unused clocks Sep 16 05:03:33.653385 kernel: Warning: unable to open an initial console. Sep 16 05:03:33.653405 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 05:03:33.653424 kernel: Write protecting the kernel read-only data: 24576k Sep 16 05:03:33.653446 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 05:03:33.653464 kernel: Run /init as init process Sep 16 05:03:33.653483 kernel: with arguments: Sep 16 05:03:33.653502 kernel: /init Sep 16 05:03:33.653521 kernel: with environment: Sep 16 05:03:33.653539 kernel: HOME=/ Sep 16 05:03:33.653589 kernel: TERM=linux Sep 16 05:03:33.653609 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 05:03:33.653630 systemd[1]: Successfully made /usr/ read-only. Sep 16 05:03:33.653658 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 05:03:33.653679 systemd[1]: Detected virtualization google. Sep 16 05:03:33.653698 systemd[1]: Detected architecture x86-64. Sep 16 05:03:33.653717 systemd[1]: Running in initrd. Sep 16 05:03:33.653736 systemd[1]: No hostname configured, using default hostname. Sep 16 05:03:33.653757 systemd[1]: Hostname set to . Sep 16 05:03:33.653776 systemd[1]: Initializing machine ID from random generator. Sep 16 05:03:33.653800 systemd[1]: Queued start job for default target initrd.target. Sep 16 05:03:33.653969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:03:33.653992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:03:33.654014 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 05:03:33.654035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 05:03:33.654055 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 05:03:33.654081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 05:03:33.654103 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 05:03:33.654124 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 05:03:33.654145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:03:33.654166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:03:33.654187 systemd[1]: Reached target paths.target - Path Units. Sep 16 05:03:33.654207 systemd[1]: Reached target slices.target - Slice Units. Sep 16 05:03:33.654231 systemd[1]: Reached target swap.target - Swaps. Sep 16 05:03:33.654252 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:03:33.654273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 16 05:03:33.654292 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 05:03:33.654313 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 05:03:33.654333 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 05:03:33.654354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:03:33.654374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 05:03:33.654399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:03:33.654420 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:03:33.654440 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 05:03:33.654461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 05:03:33.654481 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 05:03:33.654503 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 05:03:33.654523 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 05:03:33.654544 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:03:33.654577 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 05:03:33.654611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:33.654673 systemd-journald[207]: Collecting audit messages is disabled. Sep 16 05:03:33.654719 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 05:03:33.654745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:03:33.654767 systemd-journald[207]: Journal started Sep 16 05:03:33.654808 systemd-journald[207]: Runtime Journal (/run/log/journal/c677f915df094faa8b21735bdcf69f05) is 8M, max 148.9M, 140.9M free. Sep 16 05:03:33.615828 systemd-modules-load[208]: Inserted module 'overlay' Sep 16 05:03:33.667457 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:03:33.668332 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 05:03:33.673758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:03:33.686738 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 05:03:33.688851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:03:33.736840 kernel: Bridge firewalling registered Sep 16 05:03:33.694965 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 16 05:03:33.757870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:03:33.777488 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 05:03:33.785505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:33.798330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:03:33.826115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:03:33.840893 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 16 05:03:33.859081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:03:33.890028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 05:03:33.907639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:33.925646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 05:03:33.944490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 05:03:33.957060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 05:03:33.980734 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 05:03:33.982446 systemd-resolved[234]: Positive Trust Anchors: Sep 16 05:03:33.982459 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:03:33.982527 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:03:33.987134 systemd-resolved[234]: Defaulting to hostname 'linux'. Sep 16 05:03:34.097739 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 05:03:33.990826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:03:34.000183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:03:34.163736 kernel: SCSI subsystem initialized Sep 16 05:03:34.175608 kernel: Loading iSCSI transport class v2.0-870. Sep 16 05:03:34.192600 kernel: iscsi: registered transport (tcp) Sep 16 05:03:34.224361 kernel: iscsi: registered transport (qla4xxx) Sep 16 05:03:34.224446 kernel: QLogic iSCSI HBA Driver Sep 16 05:03:34.248013 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:03:34.286427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:03:34.309746 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:03:34.373410 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 05:03:34.375327 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 05:03:34.472611 kernel: raid6: avx2x4 gen() 17945 MB/s Sep 16 05:03:34.493596 kernel: raid6: avx2x2 gen() 18043 MB/s Sep 16 05:03:34.519711 kernel: raid6: avx2x1 gen() 13861 MB/s Sep 16 05:03:34.519774 kernel: raid6: using algorithm avx2x2 gen() 18043 MB/s Sep 16 05:03:34.546667 kernel: raid6: .... 
xor() 18529 MB/s, rmw enabled Sep 16 05:03:34.546743 kernel: raid6: using avx2x2 recovery algorithm Sep 16 05:03:34.575599 kernel: xor: automatically using best checksumming function avx Sep 16 05:03:34.762609 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 05:03:34.770710 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 05:03:34.781996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:03:34.814601 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 16 05:03:34.823636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:03:34.844839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 05:03:34.883373 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Sep 16 05:03:34.916099 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 05:03:34.936799 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:03:35.042773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:03:35.058122 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 05:03:35.169584 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 05:03:35.185099 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Sep 16 05:03:35.200238 kernel: scsi host0: Virtio SCSI HBA Sep 16 05:03:35.200528 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 16 05:03:35.247884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:03:35.272714 kernel: AES CTR mode by8 optimization enabled Sep 16 05:03:35.248082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:35.259193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:35.311735 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 05:03:35.331729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:35.372739 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 16 05:03:35.373063 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 16 05:03:35.373293 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 16 05:03:35.373506 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 16 05:03:35.373735 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 16 05:03:35.373954 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 05:03:35.373979 kernel: GPT:17805311 != 25165823 Sep 16 05:03:35.357586 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:03:35.412702 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 05:03:35.412744 kernel: GPT:17805311 != 25165823 Sep 16 05:03:35.412776 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 16 05:03:35.412798 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 05:03:35.412822 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 16 05:03:35.439702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:35.500172 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 05:03:35.530164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
Sep 16 05:03:35.543841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 16 05:03:35.562660 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 16 05:03:35.578192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 16 05:03:35.585939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Sep 16 05:03:35.609023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 05:03:35.628925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:03:35.648930 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 05:03:35.668000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 05:03:35.683942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 05:03:35.720965 disk-uuid[604]: Primary Header is updated. Sep 16 05:03:35.720965 disk-uuid[604]: Secondary Entries is updated. Sep 16 05:03:35.720965 disk-uuid[604]: Secondary Header is updated. Sep 16 05:03:35.738834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 05:03:35.762713 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 05:03:35.777592 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 05:03:36.798615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 05:03:36.800364 disk-uuid[605]: The operation has completed successfully. Sep 16 05:03:36.871127 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 05:03:36.871275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 05:03:36.923757 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 05:03:36.955959 sh[626]: Success Sep 16 05:03:36.992069 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 05:03:36.992905 kernel: device-mapper: uevent: version 1.0.3 Sep 16 05:03:36.992957 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 05:03:37.018582 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 16 05:03:37.101400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 05:03:37.105674 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 05:03:37.146920 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 05:03:37.176816 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (638) Sep 16 05:03:37.194267 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 05:03:37.194346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:37.223061 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 05:03:37.223154 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 05:03:37.223179 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 05:03:37.234436 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 05:03:37.242447 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 16 05:03:37.265821 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 05:03:37.267123 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 05:03:37.291817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 05:03:37.337591 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (661) Sep 16 05:03:37.355055 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:37.355143 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:37.374308 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 05:03:37.374399 kernel: BTRFS info (device sda6): turning on async discard Sep 16 05:03:37.374424 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 05:03:37.396412 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:37.396887 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 05:03:37.408451 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 05:03:37.499388 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 05:03:37.501901 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:03:37.614168 systemd-networkd[807]: lo: Link UP Sep 16 05:03:37.614181 systemd-networkd[807]: lo: Gained carrier Sep 16 05:03:37.617987 systemd-networkd[807]: Enumeration completed Sep 16 05:03:37.618548 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:37.618580 systemd-networkd[807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 05:03:37.619659 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:03:37.623400 systemd-networkd[807]: eth0: Link UP Sep 16 05:03:37.624959 systemd-networkd[807]: eth0: Gained carrier Sep 16 05:03:37.690897 ignition[734]: Ignition 2.22.0 Sep 16 05:03:37.624980 systemd-networkd[807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:37.690907 ignition[734]: Stage: fetch-offline Sep 16 05:03:37.641387 systemd-networkd[807]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8' Sep 16 05:03:37.690940 ignition[734]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:37.641409 systemd-networkd[807]: eth0: DHCPv4 address 10.128.0.3/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 16 05:03:37.690950 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:37.673991 systemd[1]: Reached target network.target - Network. Sep 16 05:03:37.691051 ignition[734]: parsed url from cmdline: "" Sep 16 05:03:37.692893 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 05:03:37.691056 ignition[734]: no config URL provided Sep 16 05:03:37.697580 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 16 05:03:37.691062 ignition[734]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 05:03:37.756136 unknown[817]: fetched base config from "system" Sep 16 05:03:37.691070 ignition[734]: no config at "/usr/lib/ignition/user.ign" Sep 16 05:03:37.756148 unknown[817]: fetched base config from "system" Sep 16 05:03:37.691078 ignition[734]: failed to fetch config: resource requires networking Sep 16 05:03:37.756158 unknown[817]: fetched user config from "gcp" Sep 16 05:03:37.691308 ignition[734]: Ignition finished successfully Sep 16 05:03:37.759322 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 05:03:37.742455 ignition[817]: Ignition 2.22.0 Sep 16 05:03:37.784640 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 05:03:37.742464 ignition[817]: Stage: fetch Sep 16 05:03:37.845705 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 05:03:37.742709 ignition[817]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:37.859057 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 05:03:37.742730 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:37.917491 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 05:03:37.742871 ignition[817]: parsed url from cmdline: "" Sep 16 05:03:37.922481 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 05:03:37.742878 ignition[817]: no config URL provided Sep 16 05:03:37.934957 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 05:03:37.742886 ignition[817]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 05:03:37.965883 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:03:37.742901 ignition[817]: no config at "/usr/lib/ignition/user.ign" Sep 16 05:03:37.975022 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:03:37.743006 ignition[817]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 16 05:03:37.990955 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:03:37.747293 ignition[817]: GET result: OK Sep 16 05:03:38.008405 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 16 05:03:37.747401 ignition[817]: parsing config with SHA512: f524f41f8d9a9830673ece35278a2d3cc63a841e79fda3d4dbd44fa78337c5fbcda040dd7ed9faee249e7a733c5fb69a435709b7eeed4689a27cd4f8e4a1fd9d Sep 16 05:03:37.756652 ignition[817]: fetch: fetch complete Sep 16 05:03:37.756659 ignition[817]: fetch: fetch passed Sep 16 05:03:37.756725 ignition[817]: Ignition finished successfully Sep 16 05:03:37.842307 ignition[824]: Ignition 2.22.0 Sep 16 05:03:37.842324 ignition[824]: Stage: kargs Sep 16 05:03:37.842480 ignition[824]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:37.842491 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:37.843380 ignition[824]: kargs: kargs passed Sep 16 05:03:37.843435 ignition[824]: Ignition finished successfully Sep 16 05:03:37.914188 ignition[831]: Ignition 2.22.0 Sep 16 05:03:37.914202 ignition[831]: Stage: disks Sep 16 05:03:37.914425 ignition[831]: no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:37.914439 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:37.915921 ignition[831]: disks: disks passed Sep 16 05:03:37.915991 ignition[831]: Ignition finished successfully Sep 16 05:03:38.079795 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 16 05:03:38.213624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 05:03:38.215491 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 05:03:38.422028 kernel: EXT4-fs (sda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none. Sep 16 05:03:38.421913 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 05:03:38.430449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 05:03:38.448074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:03:38.464530 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 05:03:38.521754 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (848) Sep 16 05:03:38.521798 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:38.521820 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:38.478293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 05:03:38.554742 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 05:03:38.554785 kernel: BTRFS info (device sda6): turning on async discard Sep 16 05:03:38.554810 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 05:03:38.478380 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 05:03:38.478426 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 05:03:38.546759 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 05:03:38.563397 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 05:03:38.587593 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 16 05:03:38.723026 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 05:03:38.732650 initrd-setup-root[879]: cut: /sysroot/etc/group: No such file or directory Sep 16 05:03:38.742471 initrd-setup-root[886]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 05:03:38.752713 initrd-setup-root[893]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 05:03:38.897266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 05:03:38.916414 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 05:03:38.924731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 05:03:38.949843 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 05:03:38.965796 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:38.998684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 05:03:39.006750 ignition[960]: INFO : Ignition 2.22.0 Sep 16 05:03:39.006750 ignition[960]: INFO : Stage: mount Sep 16 05:03:39.006750 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:39.006750 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:39.006750 ignition[960]: INFO : mount: mount passed Sep 16 05:03:39.057729 ignition[960]: INFO : Ignition finished successfully Sep 16 05:03:39.013273 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 05:03:39.027728 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 05:03:39.175793 systemd-networkd[807]: eth0: Gained IPv6LL Sep 16 05:03:39.424048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 05:03:39.467615 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (972) Sep 16 05:03:39.477609 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 05:03:39.477691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 05:03:39.502581 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 05:03:39.502675 kernel: BTRFS info (device sda6): turning on async discard Sep 16 05:03:39.502700 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 05:03:39.511213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 05:03:39.557039 ignition[989]: INFO : Ignition 2.22.0 Sep 16 05:03:39.557039 ignition[989]: INFO : Stage: files Sep 16 05:03:39.570753 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:39.570753 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:39.570753 ignition[989]: DEBUG : files: compiled without relabeling support, skipping Sep 16 05:03:39.570753 ignition[989]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 05:03:39.570753 ignition[989]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 05:03:39.570753 ignition[989]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 05:03:39.570753 ignition[989]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 05:03:39.570753 ignition[989]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 05:03:39.565158 unknown[989]: wrote ssh authorized keys file for user: core Sep 16 05:03:39.662764 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 16 05:03:39.662764 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 16 05:03:39.699058 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 05:03:40.926961 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 16 05:03:40.926961 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 05:03:40.956733 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 16 05:03:41.137271 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 16 05:03:41.284993 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 16 05:03:41.284993 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 05:03:41.313800 ignition[989]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:03:41.313800 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 16 05:03:41.580790 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 16 05:03:41.942832 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 16 05:03:41.942832 ignition[989]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 05:03:41.960849 ignition[989]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 05:03:41.960849 ignition[989]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 05:03:41.960849 ignition[989]: INFO : files: files passed Sep 16 05:03:41.960849 ignition[989]: INFO : Ignition finished successfully Sep 16 05:03:41.950534 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 05:03:41.962783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 05:03:41.996041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 05:03:42.050147 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 05:03:42.179724 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:03:42.179724 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:03:42.050272 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
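The files stage above writes several payloads into /sysroot, including the symlink that enables the kubernetes sysext image. A minimal sketch (paths copied from the log; the check itself is only illustrative and would be run while /sysroot is still mounted) that confirms the written files and the symlink target:

```python
import os

SYSROOT = "/sysroot"  # initramfs mount point used in the log above

# Files and the sysext symlink reported as written by ignition[989].
expected_files = [
    "opt/helm-v3.17.3-linux-amd64.tar.gz",
    "opt/bin/cilium.tar.gz",
    "home/core/install.sh",
    "etc/flatcar/update.conf",
    "opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
    "etc/systemd/system/prepare-helm.service",
]
link = "etc/extensions/kubernetes.raw"
link_target = "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"

for rel in expected_files:
    path = os.path.join(SYSROOT, rel)
    print(f"{path}: {'present' if os.path.exists(path) else 'MISSING'}")

link_path = os.path.join(SYSROOT, link)
if os.path.islink(link_path):
    target = os.readlink(link_path)
    status = "ok" if target == link_target else "unexpected target"
    print(f"{link_path} -> {target} ({status})")
else:
    print(f"{link_path}: MISSING")
```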
Sep 16 05:03:42.215743 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 05:03:42.101177 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 05:03:42.117024 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 05:03:42.133867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 05:03:42.220159 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 05:03:42.220293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 05:03:42.238962 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 05:03:42.257729 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 05:03:42.274828 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 05:03:42.276009 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 05:03:42.344190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 05:03:42.365767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 05:03:42.414208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:03:42.432900 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:03:42.433349 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 05:03:42.452124 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 05:03:42.452341 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 05:03:42.485143 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 05:03:42.495082 systemd[1]: Stopped target basic.target - Basic System. Sep 16 05:03:42.511096 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 05:03:42.525119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 05:03:42.542076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 05:03:42.560082 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 05:03:42.577121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 05:03:42.594115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 05:03:42.610123 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 05:03:42.630134 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 05:03:42.646111 systemd[1]: Stopped target swap.target - Swaps. Sep 16 05:03:42.662127 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 05:03:42.662339 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 05:03:42.699814 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:03:42.700213 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:03:42.717053 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 05:03:42.717214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 05:03:42.736053 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 05:03:42.736253 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 16 05:03:42.773059 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 05:03:42.773300 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 05:03:42.782126 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 05:03:42.782306 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 05:03:42.802413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 05:03:42.818931 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 05:03:42.879065 ignition[1042]: INFO : Ignition 2.22.0 Sep 16 05:03:42.879065 ignition[1042]: INFO : Stage: umount Sep 16 05:03:42.879065 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 05:03:42.879065 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 16 05:03:42.850741 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 05:03:42.941758 ignition[1042]: INFO : umount: umount passed Sep 16 05:03:42.941758 ignition[1042]: INFO : Ignition finished successfully Sep 16 05:03:42.851030 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:03:42.862420 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 05:03:42.862917 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 05:03:42.917624 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 05:03:42.919043 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 05:03:42.919174 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 05:03:42.933495 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 05:03:42.933659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 05:03:42.944783 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 05:03:42.944908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 05:03:42.967987 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 05:03:42.968053 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 05:03:42.981806 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 05:03:42.981903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 05:03:42.999917 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 05:03:42.999993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 05:03:43.008949 systemd[1]: Stopped target network.target - Network. Sep 16 05:03:43.024933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 05:03:43.025022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 05:03:43.038970 systemd[1]: Stopped target paths.target - Path Units. Sep 16 05:03:43.055914 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 05:03:43.060657 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:03:43.069895 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 05:03:43.086988 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 05:03:43.100985 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 05:03:43.101067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 05:03:43.116998 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Sep 16 05:03:43.117076 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 05:03:43.142929 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 05:03:43.143027 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 05:03:43.152032 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 05:03:43.152106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 05:03:43.167985 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 05:03:43.168068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 05:03:43.184211 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 05:03:43.208909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 05:03:43.227333 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 05:03:43.227472 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 05:03:43.238080 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 05:03:43.238345 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 05:03:43.238509 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 05:03:43.261254 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 05:03:43.263148 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 05:03:43.268907 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 05:03:43.268960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:03:43.287041 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 05:03:43.310674 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 05:03:43.310810 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 05:03:43.328865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 05:03:43.328972 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:43.365023 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 05:03:43.365101 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 05:03:43.380834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 05:03:43.380948 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:03:43.398033 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:03:43.415313 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 05:03:43.415433 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:03:43.416071 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 05:03:43.847759 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Sep 16 05:03:43.416236 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:03:43.432106 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 05:03:43.432244 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 05:03:43.455907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 05:03:43.455963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 16 05:03:43.464954 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 05:03:43.465037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 05:03:43.516755 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 05:03:43.516998 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 05:03:43.544066 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 05:03:43.544187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 05:03:43.574101 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 05:03:43.597704 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 05:03:43.597831 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:03:43.619019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 05:03:43.619091 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 05:03:43.628112 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 05:03:43.628180 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:03:43.645116 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 05:03:43.645184 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:03:43.673889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 05:03:43.673972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:43.694610 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 16 05:03:43.694684 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 16 05:03:43.694727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 16 05:03:43.694774 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 05:03:43.695300 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 05:03:43.695424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 05:03:43.702212 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 05:03:43.702321 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 05:03:43.730982 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 05:03:43.740990 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 05:03:43.795087 systemd[1]: Switching root. 
Sep 16 05:03:44.164731 systemd-journald[207]: Journal stopped Sep 16 05:03:46.793738 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 05:03:46.793797 kernel: SELinux: policy capability open_perms=1 Sep 16 05:03:46.793822 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 05:03:46.793843 kernel: SELinux: policy capability always_check_network=0 Sep 16 05:03:46.793864 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 05:03:46.793886 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 05:03:46.793916 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 05:03:46.793938 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 05:03:46.793960 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 05:03:46.793982 kernel: audit: type=1403 audit(1757999024.592:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 05:03:46.794011 systemd[1]: Successfully loaded SELinux policy in 110.982ms. Sep 16 05:03:46.794036 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.138ms. Sep 16 05:03:46.794062 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 05:03:46.794091 systemd[1]: Detected virtualization google. Sep 16 05:03:46.794116 systemd[1]: Detected architecture x86-64. Sep 16 05:03:46.794140 systemd[1]: Detected first boot. Sep 16 05:03:46.794165 systemd[1]: Initializing machine ID from random generator. Sep 16 05:03:46.794190 zram_generator::config[1085]: No configuration found. Sep 16 05:03:46.794220 kernel: Guest personality initialized and is inactive Sep 16 05:03:46.794243 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 16 05:03:46.794265 kernel: Initialized host personality Sep 16 05:03:46.794295 kernel: NET: Registered PF_VSOCK protocol family Sep 16 05:03:46.794320 systemd[1]: Populated /etc with preset unit settings. Sep 16 05:03:46.794346 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 05:03:46.794369 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 05:03:46.794398 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 05:03:46.794423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 05:03:46.794447 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 05:03:46.794472 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 05:03:46.794498 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 05:03:46.794523 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 05:03:46.794550 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 05:03:46.794592 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 05:03:46.794617 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 05:03:46.794641 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 05:03:46.794665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 16 05:03:46.794691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 05:03:46.794716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 05:03:46.794740 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 05:03:46.794766 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 05:03:46.794798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 05:03:46.794828 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 16 05:03:46.794854 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 05:03:46.794880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 05:03:46.794905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 05:03:46.794930 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 05:03:46.794956 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 05:03:46.794982 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 05:03:46.795013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 05:03:46.795038 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 05:03:46.795064 systemd[1]: Reached target slices.target - Slice Units. Sep 16 05:03:46.795090 systemd[1]: Reached target swap.target - Swaps. Sep 16 05:03:46.795117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 05:03:46.795142 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 05:03:46.795167 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 05:03:46.795198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 05:03:46.795224 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 05:03:46.795250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 05:03:46.795296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 05:03:46.795322 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 05:03:46.795348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 05:03:46.795379 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 05:03:46.795405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:46.795431 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 05:03:46.795456 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 05:03:46.795482 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 05:03:46.795509 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 05:03:46.795534 systemd[1]: Reached target machines.target - Containers. Sep 16 05:03:46.795574 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
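Unit names such as dev-disk-by\x2dlabel-OEM.device above are systemd's escaped form of the path /dev/disk/by-label/OEM: '/' becomes '-', and characters that would otherwise be ambiguous (including a literal '-') become \xNN hex escapes. A rough sketch of that escaping, approximating `systemd-escape --path` rather than reproducing systemd's exact implementation:

```python
def systemd_path_escape(path: str) -> str:
    """Approximate systemd's path escaping: trim slashes at the ends,
    map '/' to '-', and hex-escape anything outside [A-Za-z0-9:_.]."""
    safe = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789:_.")
    trimmed = path.strip("/") or "/"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch in safe and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# Matches the device unit name seen in the log above.
print(systemd_path_escape("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device
```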
Sep 16 05:03:46.795605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:46.795631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 05:03:46.795656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 05:03:46.795682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:03:46.795709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 05:03:46.795735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:03:46.795761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 05:03:46.795786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:03:46.795813 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 05:03:46.795843 kernel: fuse: init (API version 7.41) Sep 16 05:03:46.795867 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 05:03:46.795894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 05:03:46.795919 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 05:03:46.795945 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 05:03:46.795970 kernel: ACPI: bus type drm_connector registered Sep 16 05:03:46.795996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:46.796022 kernel: loop: module loaded Sep 16 05:03:46.796051 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:03:46.796076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 05:03:46.796102 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:03:46.796128 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 05:03:46.796195 systemd-journald[1173]: Collecting audit messages is disabled. Sep 16 05:03:46.796247 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 05:03:46.796279 systemd-journald[1173]: Journal started Sep 16 05:03:46.796324 systemd-journald[1173]: Runtime Journal (/run/log/journal/5d66908409214cc5908a5859b5a695be) is 8M, max 148.9M, 140.9M free. Sep 16 05:03:45.569527 systemd[1]: Queued start job for default target multi-user.target. Sep 16 05:03:45.590442 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 16 05:03:45.591115 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 05:03:46.829600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:03:46.846580 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 05:03:46.852593 systemd[1]: Stopped verity-setup.service. Sep 16 05:03:46.875587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:46.889622 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:03:46.899503 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 16 05:03:46.908965 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 05:03:46.919951 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 05:03:46.928973 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 05:03:46.937931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 05:03:46.946925 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 05:03:46.956317 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 05:03:46.967252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:03:46.978227 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 05:03:46.978779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 05:03:46.989148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:03:46.989453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:03:47.000127 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:03:47.000413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 05:03:47.010140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:03:47.010475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:03:47.021120 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 05:03:47.021413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 05:03:47.031092 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:03:47.031397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:03:47.041108 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:03:47.052123 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:03:47.063100 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 05:03:47.074060 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 05:03:47.085142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:03:47.109266 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:03:47.121401 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 05:03:47.138684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 05:03:47.147752 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 05:03:47.147997 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:03:47.158066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 05:03:47.170090 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 05:03:47.178951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:47.190848 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 05:03:47.201980 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
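The modprobe@ units above simply load the named kernel modules. Whether they ended up resident can be checked from /proc/modules, keeping in mind that modules compiled into the kernel do not appear there. A minimal sketch:

```python
# Modules requested by the modprobe@ units in the log above.
WANTED = {"fuse", "loop", "dm_mod", "efi_pstore", "configfs"}

loaded = set()
with open("/proc/modules") as f:
    for line in f:
        loaded.add(line.split()[0])

for name in sorted(WANTED):
    state = "loaded" if name in loaded else "not listed (possibly built-in)"
    print(f"{name}: {state}")
```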
Sep 16 05:03:47.213760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:03:47.216598 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 05:03:47.225761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:03:47.229695 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:03:47.243901 systemd-journald[1173]: Time spent on flushing to /var/log/journal/5d66908409214cc5908a5859b5a695be is 43.492ms for 965 entries. Sep 16 05:03:47.243901 systemd-journald[1173]: System Journal (/var/log/journal/5d66908409214cc5908a5859b5a695be) is 8M, max 584.8M, 576.8M free. Sep 16 05:03:47.337207 systemd-journald[1173]: Received client request to flush runtime journal. Sep 16 05:03:47.243680 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 05:03:47.269747 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:03:47.285956 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 05:03:47.298593 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 05:03:47.309206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 05:03:47.325302 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 05:03:47.339715 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 05:03:47.351176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 05:03:47.364589 kernel: loop0: detected capacity change from 0 to 110984 Sep 16 05:03:47.375570 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:03:47.408427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 05:03:47.412522 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 16 05:03:47.412596 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 05:03:47.413313 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Sep 16 05:03:47.425365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 05:03:47.439026 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 05:03:47.447096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 05:03:47.476952 kernel: loop1: detected capacity change from 0 to 50736 Sep 16 05:03:47.541657 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 05:03:47.549744 kernel: loop2: detected capacity change from 0 to 229808 Sep 16 05:03:47.559914 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 05:03:47.632166 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Sep 16 05:03:47.635233 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Sep 16 05:03:47.646688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 16 05:03:47.681784 kernel: loop3: detected capacity change from 0 to 128016 Sep 16 05:03:47.774604 kernel: loop4: detected capacity change from 0 to 110984 Sep 16 05:03:47.822309 kernel: loop5: detected capacity change from 0 to 50736 Sep 16 05:03:47.857959 kernel: loop6: detected capacity change from 0 to 229808 Sep 16 05:03:47.918617 kernel: loop7: detected capacity change from 0 to 128016 Sep 16 05:03:47.958362 (sd-merge)[1234]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Sep 16 05:03:47.959798 (sd-merge)[1234]: Merged extensions into '/usr'. Sep 16 05:03:47.972531 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 05:03:47.972985 systemd[1]: Reloading... Sep 16 05:03:48.112628 zram_generator::config[1256]: No configuration found. Sep 16 05:03:48.361381 ldconfig[1203]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 05:03:48.588309 systemd[1]: Reloading finished in 614 ms. Sep 16 05:03:48.624514 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 05:03:48.634356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 05:03:48.655784 systemd[1]: Starting ensure-sysext.service... Sep 16 05:03:48.674796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 05:03:48.708026 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Sep 16 05:03:48.708057 systemd[1]: Reloading... Sep 16 05:03:48.725457 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 05:03:48.726018 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 05:03:48.726652 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 05:03:48.727323 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 05:03:48.729348 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 05:03:48.730090 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Sep 16 05:03:48.730369 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Sep 16 05:03:48.738974 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 05:03:48.738992 systemd-tmpfiles[1301]: Skipping /boot Sep 16 05:03:48.755596 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 05:03:48.755795 systemd-tmpfiles[1301]: Skipping /boot Sep 16 05:03:48.816623 zram_generator::config[1328]: No configuration found. Sep 16 05:03:49.056609 systemd[1]: Reloading finished in 347 ms. Sep 16 05:03:49.074394 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 05:03:49.096238 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 05:03:49.115732 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:03:49.129264 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 05:03:49.149857 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
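The sd-merge messages above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-gce' extension images onto /usr. A minimal sketch, to be run on the booted host, that lists the merged extensions; it assumes the conventional extension-release location and falls back to the systemd-sysext CLI for the authoritative view:

```python
import glob
import os
import subprocess

# After a merge, each active sysext exposes an extension-release file here
# (assumption based on the usual sysext layout).
release_files = glob.glob("/usr/lib/extension-release.d/extension-release.*")
for path in sorted(release_files):
    name = os.path.basename(path).removeprefix("extension-release.")
    print("merged extension:", name)

# The authoritative view comes from systemd itself.
subprocess.run(["systemd-sysext", "status"], check=False)
```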
Sep 16 05:03:49.166712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 05:03:49.179163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 05:03:49.193764 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 05:03:49.203754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.204660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:49.213284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:03:49.226344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:03:49.239861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:03:49.248831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:49.249061 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:49.255368 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 05:03:49.263658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.267985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:03:49.268641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:03:49.275449 augenrules[1399]: No rules Sep 16 05:03:49.280446 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:03:49.281630 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:03:49.291552 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 05:03:49.303493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:03:49.304018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:03:49.309049 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Sep 16 05:03:49.314916 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:03:49.315196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:03:49.350125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.352000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:49.355451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:03:49.371712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:03:49.385050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:03:49.393808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 16 05:03:49.394162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:49.398018 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 05:03:49.398131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.405507 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 05:03:49.416669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 05:03:49.427771 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 05:03:49.440033 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 05:03:49.452149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:03:49.452669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:03:49.465748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:03:49.466104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:03:49.477500 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:03:49.478628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:03:49.488620 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 05:03:49.529474 systemd-resolved[1380]: Positive Trust Anchors: Sep 16 05:03:49.529502 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 05:03:49.529604 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 05:03:49.538175 systemd-resolved[1380]: Defaulting to hostname 'linux'. Sep 16 05:03:49.540422 systemd[1]: Finished ensure-sysext.service. Sep 16 05:03:49.547912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 05:03:49.563529 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 05:03:49.576841 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.580876 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:03:49.589437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 05:03:49.592851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 05:03:49.605867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 05:03:49.617902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 05:03:49.634932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 16 05:03:49.646515 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 16 05:03:49.653876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:03:49.653954 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:03:49.658823 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 05:03:49.667768 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 05:03:49.676749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 05:03:49.676796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 05:03:49.678960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:03:49.682357 augenrules[1454]: /sbin/augenrules: No change Sep 16 05:03:49.684759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:03:49.696349 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:03:49.696699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 05:03:49.706177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:03:49.706478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:03:49.716974 augenrules[1477]: No rules Sep 16 05:03:49.717258 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:03:49.717552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:03:49.727251 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:03:49.727635 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:03:49.756624 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Sep 16 05:03:49.756926 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 16 05:03:49.766727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:03:49.766799 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 05:03:49.775856 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 05:03:49.786767 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 05:03:49.796817 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 16 05:03:49.808024 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 05:03:49.816929 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 05:03:49.828185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 05:03:49.838742 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 05:03:49.839034 systemd[1]: Reached target paths.target - Path Units. 
Sep 16 05:03:49.846767 systemd[1]: Reached target timers.target - Timer Units. Sep 16 05:03:49.858218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 05:03:49.872491 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 05:03:49.887682 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 05:03:49.898054 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 05:03:49.908716 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 05:03:49.918214 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 05:03:49.929874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:03:49.936680 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 16 05:03:49.944500 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 05:03:49.958231 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 05:03:49.967612 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 16 05:03:50.000577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 16 05:03:50.009644 systemd-networkd[1466]: lo: Link UP Sep 16 05:03:50.010088 systemd-networkd[1466]: lo: Gained carrier Sep 16 05:03:50.029645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 16 05:03:50.039438 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Sep 16 05:03:50.044185 systemd-networkd[1466]: Enumeration completed Sep 16 05:03:50.045800 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:50.048614 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 05:03:50.049475 systemd-networkd[1466]: eth0: Link UP Sep 16 05:03:50.051266 systemd-networkd[1466]: eth0: Gained carrier Sep 16 05:03:50.051297 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 05:03:50.053063 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 05:03:50.065586 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 05:03:50.070162 systemd-networkd[1466]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8.c.flatcar-212911.internal' to 'ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8' Sep 16 05:03:50.070195 systemd-networkd[1466]: eth0: DHCPv4 address 10.128.0.3/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 16 05:03:50.070864 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 05:03:50.097921 systemd[1]: Reached target network.target - Network. Sep 16 05:03:50.104580 kernel: ACPI: button: Power Button [PWRF] Sep 16 05:03:50.112910 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 05:03:50.126680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
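systemd-networkd shortens the DHCP-provided hostname above because the full FQDN exceeds Linux's 64-character hostname limit; the shortened form it keeps is the first DNS label. A rough sketch of that behaviour (an approximation for illustration, not systemd's actual implementation):

```python
HOST_NAME_MAX = 64  # Linux limit on hostname length

def shorten_overlong(fqdn: str) -> str:
    """Approximation of the shortening seen in the log: if the full name
    is too long, fall back to its first label, truncating as a last resort."""
    if len(fqdn) <= HOST_NAME_MAX:
        return fqdn
    first_label = fqdn.split(".", 1)[0]
    return first_label[:HOST_NAME_MAX]

dhcp_name = ("ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8"
             ".c.flatcar-212911.internal")
print(shorten_overlong(dhcp_name))
# -> ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8
```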
Sep 16 05:03:50.147335 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 05:03:50.174677 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 05:03:50.201795 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Sep 16 05:03:50.221600 kernel: EDAC MC: Ver: 3.0.0 Sep 16 05:03:50.234506 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 16 05:03:50.234609 kernel: ACPI: button: Sleep Button [SLPF] Sep 16 05:03:50.237730 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 05:03:50.247092 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 05:03:50.257931 systemd[1]: Reached target basic.target - Basic System. Sep 16 05:03:50.265963 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 05:03:50.266158 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 05:03:50.268816 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 05:03:50.282937 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 05:03:50.296346 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 05:03:50.308639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 05:03:50.321595 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 05:03:50.326997 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 05:03:50.331353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 05:03:50.339738 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 05:03:50.354846 jq[1540]: false Sep 16 05:03:50.352109 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 05:03:50.357005 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 05:03:50.378023 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 05:03:50.409150 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 05:03:50.424702 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 05:03:50.457380 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 05:03:50.468171 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Sep 16 05:03:50.470309 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 05:03:50.473060 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 05:03:50.476456 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing passwd entry cache Sep 16 05:03:50.478922 oslogin_cache_refresh[1542]: Refreshing passwd entry cache Sep 16 05:03:50.484342 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 16 05:03:50.489603 coreos-metadata[1537]: Sep 16 05:03:50.488 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.491 INFO Fetch successful Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.491 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.492 INFO Fetch successful Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.493 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.494 INFO Fetch successful Sep 16 05:03:50.500828 coreos-metadata[1537]: Sep 16 05:03:50.500 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Sep 16 05:03:50.498973 oslogin_cache_refresh[1542]: Failure getting users, quitting Sep 16 05:03:50.501263 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting users, quitting Sep 16 05:03:50.501263 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 05:03:50.501263 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing group entry cache Sep 16 05:03:50.499017 oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 05:03:50.499087 oslogin_cache_refresh[1542]: Refreshing group entry cache Sep 16 05:03:50.504450 coreos-metadata[1537]: Sep 16 05:03:50.504 INFO Fetch successful Sep 16 05:03:50.514474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 05:03:50.519298 extend-filesystems[1541]: Found /dev/sda6 Sep 16 05:03:50.517984 oslogin_cache_refresh[1542]: Failure getting groups, quitting Sep 16 05:03:50.536990 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting groups, quitting Sep 16 05:03:50.536990 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:03:50.527069 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 05:03:50.537220 jq[1561]: true Sep 16 05:03:50.518021 oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 05:03:50.528238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 05:03:50.529049 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 05:03:50.541027 extend-filesystems[1541]: Found /dev/sda9 Sep 16 05:03:50.532854 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 05:03:50.557357 extend-filesystems[1541]: Checking size of /dev/sda9 Sep 16 05:03:50.556741 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 05:03:50.557512 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 05:03:50.572711 extend-filesystems[1541]: Resized partition /dev/sda9 Sep 16 05:03:50.579269 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 05:03:50.581472 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 16 05:03:50.588390 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 05:03:50.634589 jq[1577]: true Sep 16 05:03:50.666586 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Sep 16 05:03:50.683602 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Sep 16 05:03:50.697515 update_engine[1557]: I20250916 05:03:50.680603 1557 main.cc:92] Flatcar Update Engine starting Sep 16 05:03:50.699268 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 05:03:50.704413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 05:03:50.720287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 05:03:50.726119 extend-filesystems[1574]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 16 05:03:50.726119 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 2 Sep 16 05:03:50.726119 extend-filesystems[1574]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Sep 16 05:03:50.773786 extend-filesystems[1541]: Resized filesystem in /dev/sda9 Sep 16 05:03:50.730234 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 05:03:50.732462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 05:03:50.799203 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 05:03:50.809089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 05:03:50.832634 tar[1575]: linux-amd64/LICENSE Sep 16 05:03:50.832634 tar[1575]: linux-amd64/helm Sep 16 05:03:50.886838 bash[1614]: Updated "/home/core/.ssh/authorized_keys" Sep 16 05:03:50.894446 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 05:03:50.949142 systemd[1]: Starting sshkeys.service... Sep 16 05:03:51.013258 dbus-daemon[1538]: [system] SELinux support is enabled Sep 16 05:03:51.013725 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 05:03:51.021132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 05:03:51.021355 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 05:03:51.021519 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 05:03:51.021543 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 05:03:51.042837 ntpd[1544]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: ---------------------------------------------------- Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: corporation. Support and training for ntp-4 are Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: available at https://www.nwtime.org/support Sep 16 05:03:51.051188 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: ---------------------------------------------------- Sep 16 05:03:51.042927 ntpd[1544]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:51.042942 ntpd[1544]: ---------------------------------------------------- Sep 16 05:03:51.042955 ntpd[1544]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:51.042968 ntpd[1544]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:51.042980 ntpd[1544]: corporation. Support and training for ntp-4 are Sep 16 05:03:51.042993 ntpd[1544]: available at https://www.nwtime.org/support Sep 16 05:03:51.043006 ntpd[1544]: ---------------------------------------------------- Sep 16 05:03:51.058240 dbus-daemon[1538]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 16 05:03:51.071301 ntpd[1544]: proto: precision = 0.112 usec (-23) Sep 16 05:03:51.073361 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: proto: precision = 0.112 usec (-23) Sep 16 05:03:51.066074 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 05:03:51.069463 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 05:03:51.075152 update_engine[1557]: I20250916 05:03:51.074270 1557 update_check_scheduler.cc:74] Next update check in 2m54s Sep 16 05:03:51.075957 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 16 05:03:51.083831 ntpd[1544]: basedate set to 2025-09-04 Sep 16 05:03:51.084779 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: basedate set to 2025-09-04 Sep 16 05:03:51.084779 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:51.084779 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:51.084779 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:51.083861 ntpd[1544]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:51.084024 ntpd[1544]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:51.084061 ntpd[1544]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:51.104434 kernel: ntpd[1544]: segfault at 24 ip 000056429f67aaeb sp 00007ffd94a71580 error 4 in ntpd[68aeb,56429f618000+80000] likely on CPU 0 (core 0, socket 0) Sep 16 05:03:51.104529 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 16 05:03:51.085689 systemd[1]: Started update-engine.service - Update Engine. 
Sep 16 05:03:51.105135 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:51.105135 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Listen normally on 3 eth0 10.128.0.3:123 Sep 16 05:03:51.105135 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:51.105135 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: bind(21) AF_INET6 [fe80::4001:aff:fe80:3%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 05:03:51.105135 ntpd[1544]: 16 Sep 05:03:51 ntpd[1544]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:3%2]:123 Sep 16 05:03:51.086941 ntpd[1544]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:51.105046 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 05:03:51.086986 ntpd[1544]: Listen normally on 3 eth0 10.128.0.3:123 Sep 16 05:03:51.087028 ntpd[1544]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:51.087070 ntpd[1544]: bind(21) AF_INET6 [fe80::4001:aff:fe80:3%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 05:03:51.087098 ntpd[1544]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:3%2]:123 Sep 16 05:03:51.213897 systemd-coredump[1624]: Process 1544 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 05:03:51.221614 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 16 05:03:51.226687 systemd[1]: Started systemd-coredump@0-1624-0.service - Process Core Dump (PID 1624/UID 0). Sep 16 05:03:51.343183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 05:03:51.365879 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.395 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetch failed with 404: resource not found Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetch successful Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetch failed with 404: resource not found Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetch failed with 404: resource not found Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Sep 16 05:03:51.396453 coreos-metadata[1620]: Sep 16 05:03:51.396 INFO Fetch successful Sep 16 05:03:51.398800 unknown[1620]: wrote ssh authorized keys file for user: core Sep 16 05:03:51.448395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 05:03:51.469595 update-ssh-keys[1641]: Updated "/home/core/.ssh/authorized_keys" Sep 16 05:03:51.467694 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 05:03:51.480978 systemd[1]: Started sshd@0-10.128.0.3:22-139.178.68.195:37620.service - OpenSSH per-connection server daemon (139.178.68.195:37620). 
Sep 16 05:03:51.491965 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 05:03:51.519672 systemd[1]: Finished sshkeys.service. Sep 16 05:03:51.536133 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button) Sep 16 05:03:51.536174 systemd-logind[1555]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 16 05:03:51.536205 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 05:03:51.536509 systemd-logind[1555]: New seat seat0. Sep 16 05:03:51.574924 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 05:03:51.633243 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 05:03:51.634651 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 05:03:51.649300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 05:03:51.653060 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 16 05:03:51.657727 dbus-daemon[1538]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1621 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 16 05:03:51.660063 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 16 05:03:51.676157 systemd[1]: Starting polkit.service - Authorization Manager... Sep 16 05:03:51.699811 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 05:03:51.726427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 05:03:51.743108 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 05:03:51.754472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 05:03:51.764990 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 05:03:51.847738 systemd-networkd[1466]: eth0: Gained IPv6LL Sep 16 05:03:51.861337 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 05:03:51.873255 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 05:03:51.874443 containerd[1578]: time="2025-09-16T05:03:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 05:03:51.876623 containerd[1578]: time="2025-09-16T05:03:51.876233504Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 05:03:51.878431 systemd-coredump[1628]: Process 1544 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1544: #0 0x000056429f67aaeb n/a (ntpd + 0x68aeb) #1 0x000056429f623cdf n/a (ntpd + 0x11cdf) #2 0x000056429f624575 n/a (ntpd + 0x12575) #3 0x000056429f61fd8a n/a (ntpd + 0xdd8a) #4 0x000056429f6215d3 n/a (ntpd + 0xf5d3) #5 0x000056429f629fd1 n/a (ntpd + 0x17fd1) #6 0x000056429f61ac2d n/a (ntpd + 0x8c2d) #7 0x00007f861a98816c n/a (libc.so.6 + 0x2716c) #8 0x00007f861a988229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056429f61ac55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Sep 16 05:03:51.887076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:03:51.900706 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 05:03:51.915776 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Sep 16 05:03:51.924868 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Sep 16 05:03:51.925101 systemd[1]: ntpd.service: Failed with result 'core-dump'. Sep 16 05:03:51.931312 systemd[1]: systemd-coredump@0-1624-0.service: Deactivated successfully. Sep 16 05:03:51.976249 init.sh[1669]: + '[' -e /etc/default/instance_configs.cfg.template ']' Sep 16 05:03:51.979121 init.sh[1669]: + echo -e '[InstanceSetup]\nset_host_keys = false' Sep 16 05:03:51.982140 init.sh[1669]: + /usr/bin/google_instance_setup Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.987835899Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.729µs" Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.987887463Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.987919668Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.988119582Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.988148120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.988187792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.988271638Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 05:03:51.989316 containerd[1578]: time="2025-09-16T05:03:51.988291300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:03:51.992418 containerd[1578]: time="2025-09-16T05:03:51.990584571Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 05:03:51.992418 containerd[1578]: time="2025-09-16T05:03:51.991442129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:03:51.992418 containerd[1578]: time="2025-09-16T05:03:51.991473914Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 05:03:51.992418 containerd[1578]: time="2025-09-16T05:03:51.991490780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.993279297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.996720009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.996785181Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.996805197Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.996869116Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.997435569Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 05:03:51.998377 containerd[1578]: time="2025-09-16T05:03:51.997545306Z" level=info msg="metadata content store policy set" policy=shared Sep 16 05:03:51.995248 polkitd[1658]: Started polkitd version 126 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007631748Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007696557Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007719025Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007738844Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007758408Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007776560Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007797126Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007818309Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007836150Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: time="2025-09-16T05:03:52.007852995Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 05:03:52.007884 containerd[1578]: 
time="2025-09-16T05:03:52.007869624Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.007892146Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008049046Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008090477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008117992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008142561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008162476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008179667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008198387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.008215785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010470147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010500256Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010521689Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010645284Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010671144Z" level=info msg="Start snapshots syncer" Sep 16 05:03:52.012167 containerd[1578]: time="2025-09-16T05:03:52.010730009Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 05:03:52.014279 containerd[1578]: time="2025-09-16T05:03:52.011099592Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 05:03:52.014279 containerd[1578]: time="2025-09-16T05:03:52.011186287Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011268104Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011409846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011442985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011462652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011480585Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011529307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011620456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011643660Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011681367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: 
time="2025-09-16T05:03:52.011700653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011720169Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011754083Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011775417Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 05:03:52.014467 containerd[1578]: time="2025-09-16T05:03:52.011790134Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011806555Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011820933Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011836058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011852262Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011878119Z" level=info msg="runtime interface created" Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011887109Z" level=info msg="created NRI interface" Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011901358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011919028Z" level=info msg="Connect containerd service" Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.011960954Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 05:03:52.017114 containerd[1578]: time="2025-09-16T05:03:52.013194529Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:03:52.032846 polkitd[1658]: Loading rules from directory /etc/polkit-1/rules.d Sep 16 05:03:52.038768 polkitd[1658]: Loading rules from directory /run/polkit-1/rules.d Sep 16 05:03:52.038864 polkitd[1658]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:03:52.039455 polkitd[1658]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 16 05:03:52.039496 polkitd[1658]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 05:03:52.039575 polkitd[1658]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 16 05:03:52.046235 polkitd[1658]: Finished loading, compiling and executing 2 rules Sep 16 
05:03:52.046633 systemd[1]: Started polkit.service - Authorization Manager. Sep 16 05:03:52.050085 dbus-daemon[1538]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 16 05:03:52.053665 polkitd[1658]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 16 05:03:52.057624 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Sep 16 05:03:52.071167 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 05:03:52.091622 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 05:03:52.148999 systemd-hostnamed[1621]: Hostname set to (transient) Sep 16 05:03:52.150604 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 37620 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:03:52.154262 systemd-resolved[1380]: System hostname changed to 'ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8'. Sep 16 05:03:52.158314 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:52.187201 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 05:03:52.199132 ntpd[1695]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: ---------------------------------------------------- Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: corporation. Support and training for ntp-4 are Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: available at https://www.nwtime.org/support Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: ---------------------------------------------------- Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: proto: precision = 0.112 usec (-23) Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: basedate set to 2025-09-04 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen normally on 3 eth0 10.128.0.3:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3%2]:123 Sep 16 05:03:52.210839 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: Listening on routing socket on fd #22 for interface updates Sep 16 05:03:52.199217 ntpd[1695]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 05:03:52.212706 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 16 05:03:52.199233 ntpd[1695]: ---------------------------------------------------- Sep 16 05:03:52.199246 ntpd[1695]: ntp-4 is maintained by Network Time Foundation, Sep 16 05:03:52.199259 ntpd[1695]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 05:03:52.199272 ntpd[1695]: corporation. Support and training for ntp-4 are Sep 16 05:03:52.199285 ntpd[1695]: available at https://www.nwtime.org/support Sep 16 05:03:52.199299 ntpd[1695]: ---------------------------------------------------- Sep 16 05:03:52.203462 ntpd[1695]: proto: precision = 0.112 usec (-23) Sep 16 05:03:52.204854 ntpd[1695]: basedate set to 2025-09-04 Sep 16 05:03:52.204876 ntpd[1695]: gps base set to 2025-09-07 (week 2383) Sep 16 05:03:52.204989 ntpd[1695]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 05:03:52.205028 ntpd[1695]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 05:03:52.205272 ntpd[1695]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 05:03:52.205314 ntpd[1695]: Listen normally on 3 eth0 10.128.0.3:123 Sep 16 05:03:52.205359 ntpd[1695]: Listen normally on 4 lo [::1]:123 Sep 16 05:03:52.205403 ntpd[1695]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:3%2]:123 Sep 16 05:03:52.205441 ntpd[1695]: Listening on routing socket on fd #22 for interface updates Sep 16 05:03:52.219352 ntpd[1695]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:52.219710 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:52.219710 ntpd[1695]: 16 Sep 05:03:52 ntpd[1695]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:52.219404 ntpd[1695]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 05:03:52.250691 systemd-logind[1555]: New session 1 of user core. Sep 16 05:03:52.272894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 05:03:52.292938 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 05:03:52.341910 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 05:03:52.349615 systemd-logind[1555]: New session c1 of user core. Sep 16 05:03:52.459442 containerd[1578]: time="2025-09-16T05:03:52.459312991Z" level=info msg="Start subscribing containerd event" Sep 16 05:03:52.459698 containerd[1578]: time="2025-09-16T05:03:52.459639734Z" level=info msg="Start recovering state" Sep 16 05:03:52.460001 containerd[1578]: time="2025-09-16T05:03:52.459972583Z" level=info msg="Start event monitor" Sep 16 05:03:52.460111 containerd[1578]: time="2025-09-16T05:03:52.460093757Z" level=info msg="Start cni network conf syncer for default" Sep 16 05:03:52.460212 containerd[1578]: time="2025-09-16T05:03:52.460186574Z" level=info msg="Start streaming server" Sep 16 05:03:52.460313 containerd[1578]: time="2025-09-16T05:03:52.460295456Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 05:03:52.460405 containerd[1578]: time="2025-09-16T05:03:52.460387469Z" level=info msg="runtime interface starting up..." Sep 16 05:03:52.460632 containerd[1578]: time="2025-09-16T05:03:52.460501289Z" level=info msg="starting plugins..." Sep 16 05:03:52.460828 containerd[1578]: time="2025-09-16T05:03:52.460804920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 05:03:52.465344 containerd[1578]: time="2025-09-16T05:03:52.462443800Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 16 05:03:52.465344 containerd[1578]: time="2025-09-16T05:03:52.462535726Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 05:03:52.465344 containerd[1578]: time="2025-09-16T05:03:52.462652497Z" level=info msg="containerd successfully booted in 0.591548s" Sep 16 05:03:52.464721 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 05:03:52.711205 tar[1575]: linux-amd64/README.md Sep 16 05:03:52.742948 systemd[1709]: Queued start job for default target default.target. Sep 16 05:03:52.744838 systemd[1709]: Created slice app.slice - User Application Slice. Sep 16 05:03:52.744878 systemd[1709]: Reached target paths.target - Paths. Sep 16 05:03:52.744955 systemd[1709]: Reached target timers.target - Timers. Sep 16 05:03:52.748702 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 05:03:52.751810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 05:03:52.776955 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 05:03:52.777161 systemd[1709]: Reached target sockets.target - Sockets. Sep 16 05:03:52.777230 systemd[1709]: Reached target basic.target - Basic System. Sep 16 05:03:52.777299 systemd[1709]: Reached target default.target - Main User Target. Sep 16 05:03:52.777350 systemd[1709]: Startup finished in 406ms. Sep 16 05:03:52.777913 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 05:03:52.794646 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 05:03:53.037489 systemd[1]: Started sshd@1-10.128.0.3:22-139.178.68.195:33346.service - OpenSSH per-connection server daemon (139.178.68.195:33346). Sep 16 05:03:53.115626 instance-setup[1682]: INFO Running google_set_multiqueue. Sep 16 05:03:53.139216 instance-setup[1682]: INFO Set channels for eth0 to 2. Sep 16 05:03:53.145080 instance-setup[1682]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Sep 16 05:03:53.147108 instance-setup[1682]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Sep 16 05:03:53.147388 instance-setup[1682]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Sep 16 05:03:53.150030 instance-setup[1682]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Sep 16 05:03:53.150388 instance-setup[1682]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Sep 16 05:03:53.153204 instance-setup[1682]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Sep 16 05:03:53.153495 instance-setup[1682]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Sep 16 05:03:53.156162 instance-setup[1682]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Sep 16 05:03:53.165747 instance-setup[1682]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 05:03:53.171067 instance-setup[1682]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Sep 16 05:03:53.173269 instance-setup[1682]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Sep 16 05:03:53.173322 instance-setup[1682]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Sep 16 05:03:53.198512 init.sh[1669]: + /usr/bin/google_metadata_script_runner --script-type startup Sep 16 05:03:53.371237 startup-script[1760]: INFO Starting startup scripts. Sep 16 05:03:53.376962 startup-script[1760]: INFO No startup scripts found in metadata. 
Sep 16 05:03:53.377045 startup-script[1760]: INFO Finished running startup scripts. Sep 16 05:03:53.391173 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 33346 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:03:53.394975 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:53.406700 init.sh[1669]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Sep 16 05:03:53.406700 init.sh[1669]: + daemon_pids=() Sep 16 05:03:53.406700 init.sh[1669]: + for d in accounts clock_skew network Sep 16 05:03:53.406969 init.sh[1669]: + daemon_pids+=($!) Sep 16 05:03:53.406969 init.sh[1669]: + for d in accounts clock_skew network Sep 16 05:03:53.407463 init.sh[1669]: + daemon_pids+=($!) Sep 16 05:03:53.407463 init.sh[1669]: + for d in accounts clock_skew network Sep 16 05:03:53.407597 init.sh[1763]: + /usr/bin/google_accounts_daemon Sep 16 05:03:53.408599 init.sh[1764]: + /usr/bin/google_clock_skew_daemon Sep 16 05:03:53.408918 init.sh[1765]: + /usr/bin/google_network_daemon Sep 16 05:03:53.410673 systemd-logind[1555]: New session 2 of user core. Sep 16 05:03:53.411893 init.sh[1669]: + daemon_pids+=($!) Sep 16 05:03:53.411893 init.sh[1669]: + NOTIFY_SOCKET=/run/systemd/notify Sep 16 05:03:53.411893 init.sh[1669]: + /usr/bin/systemd-notify --ready Sep 16 05:03:53.414760 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 05:03:53.439550 systemd[1]: Started oem-gce.service - GCE Linux Agent. Sep 16 05:03:53.452390 init.sh[1669]: + wait -n 1763 1764 1765 Sep 16 05:03:53.622287 sshd[1767]: Connection closed by 139.178.68.195 port 33346 Sep 16 05:03:53.620929 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:53.636819 systemd[1]: sshd@1-10.128.0.3:22-139.178.68.195:33346.service: Deactivated successfully. Sep 16 05:03:53.643303 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 05:03:53.650048 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. Sep 16 05:03:53.654813 systemd-logind[1555]: Removed session 2. Sep 16 05:03:53.680953 systemd[1]: Started sshd@2-10.128.0.3:22-139.178.68.195:33358.service - OpenSSH per-connection server daemon (139.178.68.195:33358). Sep 16 05:03:53.870967 google-networking[1765]: INFO Starting Google Networking daemon. Sep 16 05:03:53.892845 google-clock-skew[1764]: INFO Starting Google Clock Skew daemon. Sep 16 05:03:53.907296 google-clock-skew[1764]: INFO Clock drift token has changed: 0. Sep 16 05:03:53.951621 groupadd[1784]: group added to /etc/group: name=google-sudoers, GID=1000 Sep 16 05:03:53.957074 groupadd[1784]: group added to /etc/gshadow: name=google-sudoers Sep 16 05:03:54.007989 groupadd[1784]: new group: name=google-sudoers, GID=1000 Sep 16 05:03:54.036326 google-accounts[1763]: INFO Starting Google Accounts daemon. Sep 16 05:03:54.045409 sshd[1775]: Accepted publickey for core from 139.178.68.195 port 33358 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:03:54.049771 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:03:54.053398 google-accounts[1763]: WARNING OS Login not installed. Sep 16 05:03:54.055480 google-accounts[1763]: INFO Creating a new user account for 0. Sep 16 05:03:54.066142 systemd-logind[1555]: New session 3 of user core. 
Sep 16 05:03:54.067688 init.sh[1792]: useradd: invalid user name '0': use --badname to ignore Sep 16 05:03:54.068451 google-accounts[1763]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Sep 16 05:03:54.069766 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 05:03:54.000545 systemd-resolved[1380]: Clock change detected. Flushing caches. Sep 16 05:03:54.014383 systemd-journald[1173]: Time jumped backwards, rotating. Sep 16 05:03:54.003805 google-clock-skew[1764]: INFO Synced system time with hardware clock. Sep 16 05:03:54.069889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:03:54.081011 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 05:03:54.085679 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:03:54.090623 systemd[1]: Startup finished in 3.657s (kernel) + 11.708s (initrd) + 9.778s (userspace) = 25.144s. Sep 16 05:03:54.093398 sshd[1794]: Connection closed by 139.178.68.195 port 33358 Sep 16 05:03:54.094181 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Sep 16 05:03:54.106531 systemd[1]: sshd@2-10.128.0.3:22-139.178.68.195:33358.service: Deactivated successfully. Sep 16 05:03:54.109875 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 05:03:54.118165 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. Sep 16 05:03:54.129625 systemd-logind[1555]: Removed session 3. Sep 16 05:03:54.968271 kubelet[1802]: E0916 05:03:54.968192 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:03:54.971650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:03:54.971915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:03:54.972764 systemd[1]: kubelet.service: Consumed 1.296s CPU time, 266M memory peak. Sep 16 05:04:04.149815 systemd[1]: Started sshd@3-10.128.0.3:22-139.178.68.195:47900.service - OpenSSH per-connection server daemon (139.178.68.195:47900). Sep 16 05:04:04.466888 sshd[1817]: Accepted publickey for core from 139.178.68.195 port 47900 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:04.468747 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:04.477023 systemd-logind[1555]: New session 4 of user core. Sep 16 05:04:04.488404 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 05:04:04.681685 sshd[1820]: Connection closed by 139.178.68.195 port 47900 Sep 16 05:04:04.682578 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:04.688587 systemd[1]: sshd@3-10.128.0.3:22-139.178.68.195:47900.service: Deactivated successfully. Sep 16 05:04:04.690962 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 05:04:04.692306 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Sep 16 05:04:04.694175 systemd-logind[1555]: Removed session 4. 
Sep 16 05:04:04.735641 systemd[1]: Started sshd@4-10.128.0.3:22-139.178.68.195:47910.service - OpenSSH per-connection server daemon (139.178.68.195:47910). Sep 16 05:04:04.983130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 05:04:04.987364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:05.046392 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 47910 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:05.048205 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:05.056189 systemd-logind[1555]: New session 5 of user core. Sep 16 05:04:05.061397 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 05:04:05.255882 sshd[1832]: Connection closed by 139.178.68.195 port 47910 Sep 16 05:04:05.257080 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:05.263220 systemd[1]: sshd@4-10.128.0.3:22-139.178.68.195:47910.service: Deactivated successfully. Sep 16 05:04:05.265662 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 05:04:05.266920 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Sep 16 05:04:05.269327 systemd-logind[1555]: Removed session 5. Sep 16 05:04:05.311701 systemd[1]: Started sshd@5-10.128.0.3:22-139.178.68.195:47912.service - OpenSSH per-connection server daemon (139.178.68.195:47912). Sep 16 05:04:05.365409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:05.375851 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:04:05.436688 kubelet[1847]: E0916 05:04:05.436627 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:04:05.442015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:04:05.442274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:04:05.442829 systemd[1]: kubelet.service: Consumed 211ms CPU time, 109M memory peak. Sep 16 05:04:05.627299 sshd[1839]: Accepted publickey for core from 139.178.68.195 port 47912 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:05.629477 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:05.636269 systemd-logind[1555]: New session 6 of user core. Sep 16 05:04:05.643347 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 05:04:05.843913 sshd[1855]: Connection closed by 139.178.68.195 port 47912 Sep 16 05:04:05.844834 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:05.850718 systemd[1]: sshd@5-10.128.0.3:22-139.178.68.195:47912.service: Deactivated successfully. Sep 16 05:04:05.853055 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 05:04:05.854376 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Sep 16 05:04:05.856478 systemd-logind[1555]: Removed session 6. Sep 16 05:04:05.898644 systemd[1]: Started sshd@6-10.128.0.3:22-139.178.68.195:47916.service - OpenSSH per-connection server daemon (139.178.68.195:47916). 
Sep 16 05:04:06.219952 sshd[1861]: Accepted publickey for core from 139.178.68.195 port 47916 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:06.221821 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:06.229171 systemd-logind[1555]: New session 7 of user core. Sep 16 05:04:06.236441 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 05:04:06.415460 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 05:04:06.415948 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:04:06.433908 sudo[1865]: pam_unix(sudo:session): session closed for user root Sep 16 05:04:06.476862 sshd[1864]: Connection closed by 139.178.68.195 port 47916 Sep 16 05:04:06.478384 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:06.483684 systemd[1]: sshd@6-10.128.0.3:22-139.178.68.195:47916.service: Deactivated successfully. Sep 16 05:04:06.486026 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 05:04:06.488755 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Sep 16 05:04:06.490789 systemd-logind[1555]: Removed session 7. Sep 16 05:04:06.530744 systemd[1]: Started sshd@7-10.128.0.3:22-139.178.68.195:47930.service - OpenSSH per-connection server daemon (139.178.68.195:47930). Sep 16 05:04:06.839620 sshd[1871]: Accepted publickey for core from 139.178.68.195 port 47930 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:06.841679 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:06.849164 systemd-logind[1555]: New session 8 of user core. Sep 16 05:04:06.858467 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 05:04:07.018770 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 05:04:07.019296 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:04:07.026411 sudo[1876]: pam_unix(sudo:session): session closed for user root Sep 16 05:04:07.040866 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 05:04:07.041378 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:04:07.054667 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:04:07.102932 augenrules[1898]: No rules Sep 16 05:04:07.104384 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:04:07.104724 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:04:07.107950 sudo[1875]: pam_unix(sudo:session): session closed for user root Sep 16 05:04:07.150569 sshd[1874]: Connection closed by 139.178.68.195 port 47930 Sep 16 05:04:07.151406 sshd-session[1871]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:07.157520 systemd[1]: sshd@7-10.128.0.3:22-139.178.68.195:47930.service: Deactivated successfully. Sep 16 05:04:07.159938 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 05:04:07.161666 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Sep 16 05:04:07.163420 systemd-logind[1555]: Removed session 8. Sep 16 05:04:07.208145 systemd[1]: Started sshd@8-10.128.0.3:22-139.178.68.195:47938.service - OpenSSH per-connection server daemon (139.178.68.195:47938). 
Sep 16 05:04:07.515874 sshd[1907]: Accepted publickey for core from 139.178.68.195 port 47938 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:04:07.517796 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:04:07.525166 systemd-logind[1555]: New session 9 of user core. Sep 16 05:04:07.534446 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 05:04:07.694536 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 05:04:07.695031 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:04:08.185907 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 05:04:08.197803 (dockerd)[1928]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 05:04:08.559014 dockerd[1928]: time="2025-09-16T05:04:08.558604251Z" level=info msg="Starting up" Sep 16 05:04:08.561154 dockerd[1928]: time="2025-09-16T05:04:08.561074858Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 05:04:08.577786 dockerd[1928]: time="2025-09-16T05:04:08.577694131Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 05:04:08.629183 dockerd[1928]: time="2025-09-16T05:04:08.628430327Z" level=info msg="Loading containers: start." Sep 16 05:04:08.648384 kernel: Initializing XFRM netlink socket Sep 16 05:04:09.011366 systemd-networkd[1466]: docker0: Link UP Sep 16 05:04:09.018082 dockerd[1928]: time="2025-09-16T05:04:09.017986907Z" level=info msg="Loading containers: done." Sep 16 05:04:09.037046 dockerd[1928]: time="2025-09-16T05:04:09.036971766Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 05:04:09.037257 dockerd[1928]: time="2025-09-16T05:04:09.037112406Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 05:04:09.037257 dockerd[1928]: time="2025-09-16T05:04:09.037234189Z" level=info msg="Initializing buildkit" Sep 16 05:04:09.071374 dockerd[1928]: time="2025-09-16T05:04:09.071312051Z" level=info msg="Completed buildkit initialization" Sep 16 05:04:09.081032 dockerd[1928]: time="2025-09-16T05:04:09.080947969Z" level=info msg="Daemon has completed initialization" Sep 16 05:04:09.081215 dockerd[1928]: time="2025-09-16T05:04:09.081038823Z" level=info msg="API listen on /run/docker.sock" Sep 16 05:04:09.081567 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 05:04:10.070342 containerd[1578]: time="2025-09-16T05:04:10.070287435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 05:04:10.572963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391374264.mount: Deactivated successfully. 
Sep 16 05:04:12.282914 containerd[1578]: time="2025-09-16T05:04:12.282834460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.284655 containerd[1578]: time="2025-09-16T05:04:12.284359929Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30122476" Sep 16 05:04:12.286186 containerd[1578]: time="2025-09-16T05:04:12.286137654Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.289987 containerd[1578]: time="2025-09-16T05:04:12.289941156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:12.291469 containerd[1578]: time="2025-09-16T05:04:12.291238591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.220895846s" Sep 16 05:04:12.291469 containerd[1578]: time="2025-09-16T05:04:12.291290946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 16 05:04:12.292171 containerd[1578]: time="2025-09-16T05:04:12.292138367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 05:04:13.956577 containerd[1578]: time="2025-09-16T05:04:13.956494159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:13.958105 containerd[1578]: time="2025-09-16T05:04:13.958038532Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26022778" Sep 16 05:04:13.959698 containerd[1578]: time="2025-09-16T05:04:13.959627568Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:13.963897 containerd[1578]: time="2025-09-16T05:04:13.963823171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:13.965387 containerd[1578]: time="2025-09-16T05:04:13.965219026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.672926482s" Sep 16 05:04:13.965387 containerd[1578]: time="2025-09-16T05:04:13.965267122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 16 05:04:13.966410 
containerd[1578]: time="2025-09-16T05:04:13.966127798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 05:04:15.385660 containerd[1578]: time="2025-09-16T05:04:15.385566342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.387131 containerd[1578]: time="2025-09-16T05:04:15.387012804Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20157484" Sep 16 05:04:15.388608 containerd[1578]: time="2025-09-16T05:04:15.388537837Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.392049 containerd[1578]: time="2025-09-16T05:04:15.391982727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:15.393550 containerd[1578]: time="2025-09-16T05:04:15.393346305Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.427177349s" Sep 16 05:04:15.393550 containerd[1578]: time="2025-09-16T05:04:15.393389119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 16 05:04:15.394400 containerd[1578]: time="2025-09-16T05:04:15.394153978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 05:04:15.692837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 05:04:15.695472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:16.133763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:16.148587 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:04:16.247071 kubelet[2217]: E0916 05:04:16.247006 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:04:16.252966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:04:16.253377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:04:16.254158 systemd[1]: kubelet.service: Consumed 243ms CPU time, 108.5M memory peak. Sep 16 05:04:16.734279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount993567407.mount: Deactivated successfully. 
Sep 16 05:04:17.488731 containerd[1578]: time="2025-09-16T05:04:17.488642167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:17.490294 containerd[1578]: time="2025-09-16T05:04:17.489978527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31931364" Sep 16 05:04:17.491603 containerd[1578]: time="2025-09-16T05:04:17.491559967Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:17.494354 containerd[1578]: time="2025-09-16T05:04:17.494312979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:17.495231 containerd[1578]: time="2025-09-16T05:04:17.495186419Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.100988594s" Sep 16 05:04:17.495327 containerd[1578]: time="2025-09-16T05:04:17.495244070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 16 05:04:17.496121 containerd[1578]: time="2025-09-16T05:04:17.496042449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 05:04:17.916483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971615497.mount: Deactivated successfully. 
Sep 16 05:04:19.284653 containerd[1578]: time="2025-09-16T05:04:19.284573095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.286290 containerd[1578]: time="2025-09-16T05:04:19.286106897Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Sep 16 05:04:19.287553 containerd[1578]: time="2025-09-16T05:04:19.287507914Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.291303 containerd[1578]: time="2025-09-16T05:04:19.291255930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:19.293117 containerd[1578]: time="2025-09-16T05:04:19.292739755Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.796656629s" Sep 16 05:04:19.293117 containerd[1578]: time="2025-09-16T05:04:19.292788181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 16 05:04:19.293489 containerd[1578]: time="2025-09-16T05:04:19.293444146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 05:04:19.696122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292262914.mount: Deactivated successfully. 
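(Illustrative aside, not part of the journal: each containerd pull above reports both the compressed bytes read and the wall-clock pull time, so per-image throughput can be derived directly. The values below are copied by hand from the "bytes read" and "Pulled image ... in" messages; nothing here is emitted by containerd itself.)

```python
# Hand-transcribed from the containerd pull messages above (illustrative only).
pulls = {
    # image: (bytes read, seconds reported by the "Pulled image ... in" message)
    "kube-apiserver:v1.33.5": (30_122_476, 2.220895846),
    "kube-proxy:v1.33.5":     (31_931_364, 2.100988594),
    "coredns:v1.12.0":        (20_948_880, 1.796656629),
}

for image, (size, seconds) in pulls.items():
    mib_per_s = size / seconds / (1024 * 1024)
    print(f"{image}: {size / 1e6:.1f} MB in {seconds:.2f}s ~ {mib_per_s:.1f} MiB/s")
```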
Sep 16 05:04:19.701136 containerd[1578]: time="2025-09-16T05:04:19.701069417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:04:19.702244 containerd[1578]: time="2025-09-16T05:04:19.702113049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 16 05:04:19.703426 containerd[1578]: time="2025-09-16T05:04:19.703384193Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:04:19.707554 containerd[1578]: time="2025-09-16T05:04:19.706390684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:04:19.707554 containerd[1578]: time="2025-09-16T05:04:19.707344922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 413.851247ms" Sep 16 05:04:19.707554 containerd[1578]: time="2025-09-16T05:04:19.707389384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 05:04:19.708424 containerd[1578]: time="2025-09-16T05:04:19.708378820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 05:04:20.147393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181262265.mount: Deactivated successfully. Sep 16 05:04:22.013572 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 16 05:04:22.468041 containerd[1578]: time="2025-09-16T05:04:22.467869001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:22.472329 containerd[1578]: time="2025-09-16T05:04:22.472263480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58384071" Sep 16 05:04:22.477884 containerd[1578]: time="2025-09-16T05:04:22.477245177Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:22.482841 containerd[1578]: time="2025-09-16T05:04:22.482778277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:22.484504 containerd[1578]: time="2025-09-16T05:04:22.484452099Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.776017742s" Sep 16 05:04:22.484634 containerd[1578]: time="2025-09-16T05:04:22.484509616Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 16 05:04:26.503795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 16 05:04:26.509415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:26.827316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:26.842109 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:04:26.913078 kubelet[2371]: E0916 05:04:26.912999 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:04:26.917452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:04:26.917767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:04:26.919036 systemd[1]: kubelet.service: Consumed 261ms CPU time, 108M memory peak. Sep 16 05:04:27.141805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:27.142177 systemd[1]: kubelet.service: Consumed 261ms CPU time, 108M memory peak. Sep 16 05:04:27.147134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:27.192729 systemd[1]: Reload requested from client PID 2386 ('systemctl') (unit session-9.scope)... Sep 16 05:04:27.192752 systemd[1]: Reloading... Sep 16 05:04:27.367165 zram_generator::config[2430]: No configuration found. Sep 16 05:04:27.683877 systemd[1]: Reloading finished in 490 ms. Sep 16 05:04:27.754673 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 05:04:27.754818 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 16 05:04:27.755391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:27.755496 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.3M memory peak. Sep 16 05:04:27.758747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:28.117582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:28.131853 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:04:28.194120 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:04:28.194120 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:04:28.194120 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:04:28.194120 kubelet[2482]: I0916 05:04:28.193489 2482 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:04:28.862811 kubelet[2482]: I0916 05:04:28.862743 2482 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:04:28.862811 kubelet[2482]: I0916 05:04:28.862780 2482 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:04:28.863184 kubelet[2482]: I0916 05:04:28.863150 2482 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:04:28.919331 kubelet[2482]: E0916 05:04:28.919265 2482 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 05:04:28.920621 kubelet[2482]: I0916 05:04:28.920418 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:04:28.929818 kubelet[2482]: I0916 05:04:28.929787 2482 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:04:28.935047 kubelet[2482]: I0916 05:04:28.934992 2482 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 05:04:28.935455 kubelet[2482]: I0916 05:04:28.935401 2482 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:04:28.935682 kubelet[2482]: I0916 05:04:28.935443 2482 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:04:28.935682 kubelet[2482]: I0916 05:04:28.935677 2482 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:04:28.935930 kubelet[2482]: I0916 05:04:28.935695 2482 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:04:28.935930 kubelet[2482]: I0916 05:04:28.935865 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:04:28.939836 kubelet[2482]: I0916 05:04:28.939724 2482 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:04:28.939836 kubelet[2482]: I0916 05:04:28.939757 2482 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:04:28.939836 kubelet[2482]: I0916 05:04:28.939790 2482 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:04:28.939836 kubelet[2482]: I0916 05:04:28.939812 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:04:28.960871 kubelet[2482]: I0916 05:04:28.960315 2482 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:04:28.960871 kubelet[2482]: E0916 05:04:28.960766 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:04:28.961723 kubelet[2482]: E0916 05:04:28.961679 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.128.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8&limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:04:28.961874 kubelet[2482]: I0916 05:04:28.961777 2482 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:04:28.963247 kubelet[2482]: W0916 05:04:28.963220 2482 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 05:04:28.984513 kubelet[2482]: I0916 05:04:28.984447 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:04:28.984669 kubelet[2482]: I0916 05:04:28.984542 2482 server.go:1289] "Started kubelet" Sep 16 05:04:28.986172 kubelet[2482]: I0916 05:04:28.985889 2482 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:04:28.988109 kubelet[2482]: I0916 05:04:28.987581 2482 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:04:28.992126 kubelet[2482]: I0916 05:04:28.991069 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:04:28.992126 kubelet[2482]: I0916 05:04:28.991850 2482 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:04:28.995153 kubelet[2482]: E0916 05:04:28.992114 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.3:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8.1865aad6c64995c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,UID:ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,},FirstTimestamp:2025-09-16 05:04:28.984481216 +0000 UTC m=+0.846701436,LastTimestamp:2025-09-16 05:04:28.984481216 +0000 UTC m=+0.846701436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,}" Sep 16 05:04:28.996527 kubelet[2482]: I0916 05:04:28.996505 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:04:28.996892 kubelet[2482]: I0916 05:04:28.996862 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:04:29.003861 kubelet[2482]: E0916 05:04:29.003826 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" Sep 16 05:04:29.003959 kubelet[2482]: I0916 05:04:29.003872 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:04:29.004584 kubelet[2482]: I0916 05:04:29.004553 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:04:29.004685 kubelet[2482]: I0916 05:04:29.004636 2482 reconciler.go:26] 
"Reconciler: start to sync state" Sep 16 05:04:29.005697 kubelet[2482]: E0916 05:04:29.005657 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:04:29.005956 kubelet[2482]: I0916 05:04:29.005926 2482 factory.go:223] Registration of the systemd container factory successfully Sep 16 05:04:29.006957 kubelet[2482]: I0916 05:04:29.006043 2482 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:04:29.006957 kubelet[2482]: E0916 05:04:29.006727 2482 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:04:29.008871 kubelet[2482]: E0916 05:04:29.008828 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8?timeout=10s\": dial tcp 10.128.0.3:6443: connect: connection refused" interval="200ms" Sep 16 05:04:29.009715 kubelet[2482]: I0916 05:04:29.009689 2482 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:04:29.036881 kubelet[2482]: I0916 05:04:29.036839 2482 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 05:04:29.039372 kubelet[2482]: I0916 05:04:29.039339 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:04:29.039372 kubelet[2482]: I0916 05:04:29.039361 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:04:29.039538 kubelet[2482]: I0916 05:04:29.039391 2482 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:04:29.041461 kubelet[2482]: I0916 05:04:29.041313 2482 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 05:04:29.041556 kubelet[2482]: I0916 05:04:29.041507 2482 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:04:29.041556 kubelet[2482]: I0916 05:04:29.041536 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 05:04:29.041556 kubelet[2482]: I0916 05:04:29.041547 2482 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:04:29.041697 kubelet[2482]: E0916 05:04:29.041611 2482 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:04:29.043816 kubelet[2482]: I0916 05:04:29.043231 2482 policy_none.go:49] "None policy: Start" Sep 16 05:04:29.043816 kubelet[2482]: I0916 05:04:29.043271 2482 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:04:29.043816 kubelet[2482]: I0916 05:04:29.043292 2482 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:04:29.043816 kubelet[2482]: E0916 05:04:29.043756 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:04:29.056858 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 05:04:29.072475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 05:04:29.078334 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 05:04:29.090581 kubelet[2482]: E0916 05:04:29.090547 2482 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:04:29.091558 kubelet[2482]: I0916 05:04:29.091292 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:04:29.091558 kubelet[2482]: I0916 05:04:29.091320 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:04:29.091819 kubelet[2482]: I0916 05:04:29.091801 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:04:29.094697 kubelet[2482]: E0916 05:04:29.094675 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 05:04:29.094944 kubelet[2482]: E0916 05:04:29.094898 2482 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" Sep 16 05:04:29.167882 systemd[1]: Created slice kubepods-burstable-pod2fcbf8e320415c6c1ee825d3aba5a440.slice - libcontainer container kubepods-burstable-pod2fcbf8e320415c6c1ee825d3aba5a440.slice. Sep 16 05:04:29.180310 kubelet[2482]: E0916 05:04:29.179909 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.195514 systemd[1]: Created slice kubepods-burstable-pod2c2f642b366fae4bbb17b7820606d394.slice - libcontainer container kubepods-burstable-pod2c2f642b366fae4bbb17b7820606d394.slice. 
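(Illustrative aside, not part of the journal: the container_manager_linux nodeConfig logged above packs the hard-eviction thresholds into a single JSON-like blob. Pulling them out makes the policy easier to read; the dictionary literal below is transcribed by hand from that message, not parsed from it.)

```python
# Hand-transcribed from the HardEvictionThresholds field in the
# container_manager_linux nodeConfig message above (illustrative only).
hard_eviction_thresholds = [
    {"signal": "memory.available",   "operator": "LessThan", "quantity": "100Mi"},
    {"signal": "nodefs.available",   "operator": "LessThan", "percentage": 0.10},
    {"signal": "nodefs.inodesFree",  "operator": "LessThan", "percentage": 0.05},
    {"signal": "imagefs.available",  "operator": "LessThan", "percentage": 0.15},
    {"signal": "imagefs.inodesFree", "operator": "LessThan", "percentage": 0.05},
]

for t in hard_eviction_thresholds:
    limit = t.get("quantity") or f"{t['percentage']:.0%}"
    print(f"evict when {t['signal']} {t['operator']} {limit}")
```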
Sep 16 05:04:29.197642 kubelet[2482]: I0916 05:04:29.197325 2482 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.198147 kubelet[2482]: E0916 05:04:29.197769 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.3:6443/api/v1/nodes\": dial tcp 10.128.0.3:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.200156 kubelet[2482]: E0916 05:04:29.200107 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.204947 kubelet[2482]: I0916 05:04:29.204913 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205198 kubelet[2482]: I0916 05:04:29.205173 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205406 kubelet[2482]: I0916 05:04:29.205279 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205566 kubelet[2482]: I0916 05:04:29.205494 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205671 kubelet[2482]: I0916 05:04:29.205544 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205841 kubelet[2482]: I0916 05:04:29.205770 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c2f642b366fae4bbb17b7820606d394-kubeconfig\") pod 
\"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2c2f642b366fae4bbb17b7820606d394\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.205944 kubelet[2482]: I0916 05:04:29.205924 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.206150 kubelet[2482]: I0916 05:04:29.206035 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.206028 systemd[1]: Created slice kubepods-burstable-pod11995f83ec52233aacf5ccb29a6d278b.slice - libcontainer container kubepods-burstable-pod11995f83ec52233aacf5ccb29a6d278b.slice. Sep 16 05:04:29.206434 kubelet[2482]: I0916 05:04:29.206334 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.208817 kubelet[2482]: E0916 05:04:29.208787 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.209598 kubelet[2482]: E0916 05:04:29.209551 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8?timeout=10s\": dial tcp 10.128.0.3:6443: connect: connection refused" interval="400ms" Sep 16 05:04:29.405661 kubelet[2482]: I0916 05:04:29.405599 2482 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.406185 kubelet[2482]: E0916 05:04:29.406145 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.3:6443/api/v1/nodes\": dial tcp 10.128.0.3:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.481647 containerd[1578]: time="2025-09-16T05:04:29.481573160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:2fcbf8e320415c6c1ee825d3aba5a440,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:29.503177 containerd[1578]: time="2025-09-16T05:04:29.503083842Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:2c2f642b366fae4bbb17b7820606d394,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:29.512935 containerd[1578]: time="2025-09-16T05:04:29.511994789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:11995f83ec52233aacf5ccb29a6d278b,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:29.517488 containerd[1578]: time="2025-09-16T05:04:29.517389942Z" level=info msg="connecting to shim 7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8" address="unix:///run/containerd/s/c61e4bc567f65b7143d54ce409c051438003f4c357fa49cedd60d2dd0f0ed5ae" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:29.589550 systemd[1]: Started cri-containerd-7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8.scope - libcontainer container 7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8. Sep 16 05:04:29.592653 containerd[1578]: time="2025-09-16T05:04:29.592568437Z" level=info msg="connecting to shim 94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43" address="unix:///run/containerd/s/c47570de01993c9b441a4ccff63e18af2537393ba1daa5f4b262cb70350325f6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:29.596308 containerd[1578]: time="2025-09-16T05:04:29.596261042Z" level=info msg="connecting to shim e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f" address="unix:///run/containerd/s/34302d65571202e4bcd9bc5972d037f9734f3638e29bd2d59e42ff83da34cdcc" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:29.612311 kubelet[2482]: E0916 05:04:29.610474 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8?timeout=10s\": dial tcp 10.128.0.3:6443: connect: connection refused" interval="800ms" Sep 16 05:04:29.656401 systemd[1]: Started cri-containerd-e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f.scope - libcontainer container e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f. Sep 16 05:04:29.671558 systemd[1]: Started cri-containerd-94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43.scope - libcontainer container 94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43. 
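(Illustrative aside, not part of the journal: while the API server at 10.128.0.3:6443 is still refusing connections, the lease controller's retry interval doubles on each failure, which is why the log shows interval="200ms", then "400ms", then "800ms". The sketch below reproduces that doubling pattern; the 7-second cap is an assumption for illustration, not a value taken from the log.)

```python
def backoff_intervals(initial_ms: int = 200, factor: int = 2, cap_ms: int = 7000):
    """Yield retry intervals like the lease-controller messages above
    (200 ms, 400 ms, 800 ms, ...). The cap value is an assumption."""
    interval = initial_ms
    while True:
        yield min(interval, cap_ms)
        interval *= factor

gen = backoff_intervals()
print([next(gen) for _ in range(6)])  # [200, 400, 800, 1600, 3200, 6400]
```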
Sep 16 05:04:29.767723 containerd[1578]: time="2025-09-16T05:04:29.767560303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:2fcbf8e320415c6c1ee825d3aba5a440,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8\"" Sep 16 05:04:29.772689 kubelet[2482]: E0916 05:04:29.772615 2482 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42c" Sep 16 05:04:29.779998 containerd[1578]: time="2025-09-16T05:04:29.779931854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:11995f83ec52233aacf5ccb29a6d278b,Namespace:kube-system,Attempt:0,} returns sandbox id \"94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43\"" Sep 16 05:04:29.780511 containerd[1578]: time="2025-09-16T05:04:29.780486902Z" level=info msg="CreateContainer within sandbox \"7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 05:04:29.785866 kubelet[2482]: E0916 05:04:29.785730 2482 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8" Sep 16 05:04:29.792272 containerd[1578]: time="2025-09-16T05:04:29.792223878Z" level=info msg="CreateContainer within sandbox \"94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 05:04:29.792942 containerd[1578]: time="2025-09-16T05:04:29.792898434Z" level=info msg="Container 5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:29.817222 containerd[1578]: time="2025-09-16T05:04:29.817169514Z" level=info msg="Container 50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:29.818386 containerd[1578]: time="2025-09-16T05:04:29.818314319Z" level=info msg="CreateContainer within sandbox \"7e601dfdf7c70530f2140201f5a803b259d52bffd960663f3d9164c2a8ff5eb8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b\"" Sep 16 05:04:29.820024 containerd[1578]: time="2025-09-16T05:04:29.819950253Z" level=info msg="StartContainer for \"5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b\"" Sep 16 05:04:29.822677 kubelet[2482]: I0916 05:04:29.822580 2482 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.823073 kubelet[2482]: E0916 05:04:29.823021 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.3:6443/api/v1/nodes\": dial tcp 10.128.0.3:6443: connect: connection refused" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:29.823900 containerd[1578]: time="2025-09-16T05:04:29.823418569Z" level=info msg="connecting to shim 5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b" 
address="unix:///run/containerd/s/c61e4bc567f65b7143d54ce409c051438003f4c357fa49cedd60d2dd0f0ed5ae" protocol=ttrpc version=3 Sep 16 05:04:29.833409 containerd[1578]: time="2025-09-16T05:04:29.833191151Z" level=info msg="CreateContainer within sandbox \"94f7edef19be6037b15816794ae974c502523ea5efe970c233c6cce19ab78d43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604\"" Sep 16 05:04:29.835443 containerd[1578]: time="2025-09-16T05:04:29.835403006Z" level=info msg="StartContainer for \"50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604\"" Sep 16 05:04:29.835870 containerd[1578]: time="2025-09-16T05:04:29.835838007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8,Uid:2c2f642b366fae4bbb17b7820606d394,Namespace:kube-system,Attempt:0,} returns sandbox id \"e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f\"" Sep 16 05:04:29.838405 kubelet[2482]: E0916 05:04:29.838357 2482 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8" Sep 16 05:04:29.838840 containerd[1578]: time="2025-09-16T05:04:29.838761177Z" level=info msg="connecting to shim 50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604" address="unix:///run/containerd/s/c47570de01993c9b441a4ccff63e18af2537393ba1daa5f4b262cb70350325f6" protocol=ttrpc version=3 Sep 16 05:04:29.843793 containerd[1578]: time="2025-09-16T05:04:29.843746902Z" level=info msg="CreateContainer within sandbox \"e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 05:04:29.868586 systemd[1]: Started cri-containerd-5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b.scope - libcontainer container 5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b. Sep 16 05:04:29.878399 containerd[1578]: time="2025-09-16T05:04:29.878101358Z" level=info msg="Container 451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:29.884352 systemd[1]: Started cri-containerd-50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604.scope - libcontainer container 50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604. 
Sep 16 05:04:29.906728 containerd[1578]: time="2025-09-16T05:04:29.906233507Z" level=info msg="CreateContainer within sandbox \"e30712c93167aaff10ff40e2b2bd3634de0031cbcde63a87890b66aad0a2d07f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920\"" Sep 16 05:04:29.908793 containerd[1578]: time="2025-09-16T05:04:29.908689427Z" level=info msg="StartContainer for \"451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920\"" Sep 16 05:04:29.914372 containerd[1578]: time="2025-09-16T05:04:29.914310798Z" level=info msg="connecting to shim 451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920" address="unix:///run/containerd/s/34302d65571202e4bcd9bc5972d037f9734f3638e29bd2d59e42ff83da34cdcc" protocol=ttrpc version=3 Sep 16 05:04:29.955331 systemd[1]: Started cri-containerd-451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920.scope - libcontainer container 451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920. Sep 16 05:04:30.033040 containerd[1578]: time="2025-09-16T05:04:30.031176369Z" level=info msg="StartContainer for \"50fa99d836e3213d015f02ec41f5b66834a0bb1c9b8d84502336b791c662e604\" returns successfully" Sep 16 05:04:30.070164 containerd[1578]: time="2025-09-16T05:04:30.070116427Z" level=info msg="StartContainer for \"5e209c0be32073de8462c76db107376099b2f5e9eb7c7673ea9ef4464b2a4f4b\" returns successfully" Sep 16 05:04:30.072653 kubelet[2482]: E0916 05:04:30.072619 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:30.085036 kubelet[2482]: E0916 05:04:30.084716 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8&limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:04:30.110533 containerd[1578]: time="2025-09-16T05:04:30.110452709Z" level=info msg="StartContainer for \"451b7e7c318b652e8f1d8723f219e8638203dc05412a732b6780ed217e476920\" returns successfully" Sep 16 05:04:30.159532 kubelet[2482]: E0916 05:04:30.159443 2482 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:04:30.627820 kubelet[2482]: I0916 05:04:30.627737 2482 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:31.077489 kubelet[2482]: E0916 05:04:31.077266 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:31.078492 kubelet[2482]: E0916 05:04:31.078103 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" 
node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:31.080108 kubelet[2482]: E0916 05:04:31.078994 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:32.083118 kubelet[2482]: E0916 05:04:32.082808 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:32.084292 kubelet[2482]: E0916 05:04:32.084263 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.088778 kubelet[2482]: E0916 05:04:33.088731 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.089657 kubelet[2482]: E0916 05:04:33.089628 2482 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.353237 kubelet[2482]: E0916 05:04:33.352989 2482 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.431104 kubelet[2482]: I0916 05:04:33.431034 2482 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.431104 kubelet[2482]: E0916 05:04:33.431110 2482 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\": node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" Sep 16 05:04:33.441132 kubelet[2482]: I0916 05:04:33.440665 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.473648 kubelet[2482]: E0916 05:04:33.473323 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.507992 kubelet[2482]: I0916 05:04:33.507937 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.514486 kubelet[2482]: E0916 05:04:33.514404 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.514486 kubelet[2482]: I0916 05:04:33.514449 
2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.521119 kubelet[2482]: E0916 05:04:33.520846 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.521119 kubelet[2482]: I0916 05:04:33.520889 2482 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.525724 kubelet[2482]: E0916 05:04:33.525671 2482 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:33.963575 kubelet[2482]: I0916 05:04:33.963264 2482 apiserver.go:52] "Watching apiserver" Sep 16 05:04:34.005339 kubelet[2482]: I0916 05:04:34.005292 2482 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:04:35.630228 systemd[1]: Reload requested from client PID 2766 ('systemctl') (unit session-9.scope)... Sep 16 05:04:35.630251 systemd[1]: Reloading... Sep 16 05:04:35.757133 zram_generator::config[2806]: No configuration found. Sep 16 05:04:35.952126 update_engine[1557]: I20250916 05:04:35.951279 1557 update_attempter.cc:509] Updating boot flags... Sep 16 05:04:36.259497 systemd[1]: Reloading finished in 628 ms. Sep 16 05:04:36.427367 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:36.467898 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 05:04:36.468530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:36.468610 systemd[1]: kubelet.service: Consumed 1.403s CPU time, 130.4M memory peak. Sep 16 05:04:36.474507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:04:36.901555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:04:36.914817 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:04:36.992671 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:04:36.992671 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:04:36.992671 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 05:04:36.992671 kubelet[2882]: I0916 05:04:36.991078 2882 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:04:37.012444 kubelet[2882]: I0916 05:04:37.012380 2882 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:04:37.012444 kubelet[2882]: I0916 05:04:37.012423 2882 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:04:37.012908 kubelet[2882]: I0916 05:04:37.012874 2882 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:04:37.015251 kubelet[2882]: I0916 05:04:37.015179 2882 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 16 05:04:37.019123 kubelet[2882]: I0916 05:04:37.019006 2882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:04:37.028071 kubelet[2882]: I0916 05:04:37.028007 2882 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:04:37.033692 kubelet[2882]: I0916 05:04:37.033542 2882 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 05:04:37.034536 kubelet[2882]: I0916 05:04:37.034150 2882 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:04:37.034536 kubelet[2882]: I0916 05:04:37.034189 2882 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:04:37.034536 kubelet[2882]: I0916 05:04:37.034447 2882 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 05:04:37.034536 kubelet[2882]: I0916 05:04:37.034464 2882 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:04:37.035006 kubelet[2882]: I0916 05:04:37.034986 2882 state_mem.go:36] "Initialized new in-memory state store" Sep 16 
05:04:37.036139 kubelet[2882]: I0916 05:04:37.035351 2882 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:04:37.036306 kubelet[2882]: I0916 05:04:37.036289 2882 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:04:37.036437 kubelet[2882]: I0916 05:04:37.036424 2882 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:04:37.036531 kubelet[2882]: I0916 05:04:37.036518 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:04:37.044929 sudo[2896]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 05:04:37.045514 sudo[2896]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 05:04:37.056318 kubelet[2882]: I0916 05:04:37.056268 2882 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:04:37.057071 kubelet[2882]: I0916 05:04:37.057024 2882 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:04:37.082247 kubelet[2882]: I0916 05:04:37.082047 2882 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:04:37.082247 kubelet[2882]: I0916 05:04:37.082125 2882 server.go:1289] "Started kubelet" Sep 16 05:04:37.084622 kubelet[2882]: I0916 05:04:37.084553 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:04:37.094687 kubelet[2882]: I0916 05:04:37.094451 2882 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:04:37.098141 kubelet[2882]: I0916 05:04:37.096608 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:04:37.098141 kubelet[2882]: I0916 05:04:37.097979 2882 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:04:37.104679 kubelet[2882]: I0916 05:04:37.104644 2882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:04:37.105356 kubelet[2882]: I0916 05:04:37.105334 2882 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:04:37.105865 kubelet[2882]: E0916 05:04:37.105838 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" not found" Sep 16 05:04:37.107380 kubelet[2882]: I0916 05:04:37.107314 2882 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:04:37.107744 kubelet[2882]: I0916 05:04:37.107656 2882 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:04:37.107858 kubelet[2882]: I0916 05:04:37.107751 2882 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:04:37.126116 kubelet[2882]: I0916 05:04:37.124377 2882 factory.go:223] Registration of the systemd container factory successfully Sep 16 05:04:37.126116 kubelet[2882]: I0916 05:04:37.124559 2882 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:04:37.144233 kubelet[2882]: I0916 05:04:37.143304 2882 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:04:37.204597 kubelet[2882]: E0916 05:04:37.203849 2882 kubelet.go:1600] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 05:04:37.210259 kubelet[2882]: I0916 05:04:37.210199 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 05:04:37.221927 kubelet[2882]: I0916 05:04:37.221744 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 05:04:37.221927 kubelet[2882]: I0916 05:04:37.221784 2882 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:04:37.221927 kubelet[2882]: I0916 05:04:37.221811 2882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 05:04:37.221927 kubelet[2882]: I0916 05:04:37.221823 2882 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:04:37.221927 kubelet[2882]: E0916 05:04:37.221892 2882 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:04:37.322942 kubelet[2882]: E0916 05:04:37.322885 2882 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 05:04:37.338666 kubelet[2882]: I0916 05:04:37.338627 2882 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:04:37.338666 kubelet[2882]: I0916 05:04:37.338657 2882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:04:37.338854 kubelet[2882]: I0916 05:04:37.338685 2882 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:04:37.338919 kubelet[2882]: I0916 05:04:37.338880 2882 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 05:04:37.338919 kubelet[2882]: I0916 05:04:37.338896 2882 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 05:04:37.339044 kubelet[2882]: I0916 05:04:37.338924 2882 policy_none.go:49] "None policy: Start" Sep 16 05:04:37.339044 kubelet[2882]: I0916 05:04:37.338939 2882 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:04:37.339044 kubelet[2882]: I0916 05:04:37.338956 2882 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:04:37.340143 kubelet[2882]: I0916 05:04:37.339392 2882 state_mem.go:75] "Updated machine memory state" Sep 16 05:04:37.348834 kubelet[2882]: E0916 05:04:37.348794 2882 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:04:37.349055 kubelet[2882]: I0916 05:04:37.349032 2882 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:04:37.351424 kubelet[2882]: I0916 05:04:37.349059 2882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:04:37.353275 kubelet[2882]: I0916 05:04:37.351934 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:04:37.361502 kubelet[2882]: E0916 05:04:37.361431 2882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 05:04:37.484231 kubelet[2882]: I0916 05:04:37.482160 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.498428 kubelet[2882]: I0916 05:04:37.498364 2882 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.498609 kubelet[2882]: I0916 05:04:37.498519 2882 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.525133 kubelet[2882]: I0916 05:04:37.524485 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.525133 kubelet[2882]: I0916 05:04:37.524608 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.525133 kubelet[2882]: I0916 05:04:37.524485 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.535132 kubelet[2882]: I0916 05:04:37.535065 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 16 05:04:37.537503 kubelet[2882]: I0916 05:04:37.537329 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 16 05:04:37.539768 kubelet[2882]: I0916 05:04:37.539689 2882 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Sep 16 05:04:37.609661 kubelet[2882]: I0916 05:04:37.609589 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.610260 kubelet[2882]: I0916 05:04:37.609952 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.610582 kubelet[2882]: I0916 05:04:37.610346 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.610874 kubelet[2882]: I0916 05:04:37.610781 2882 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.611015 kubelet[2882]: I0916 05:04:37.610858 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.611209 kubelet[2882]: I0916 05:04:37.611189 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.611498 kubelet[2882]: I0916 05:04:37.611366 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fcbf8e320415c6c1ee825d3aba5a440-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2fcbf8e320415c6c1ee825d3aba5a440\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.611498 kubelet[2882]: I0916 05:04:37.611414 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c2f642b366fae4bbb17b7820606d394-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"2c2f642b366fae4bbb17b7820606d394\") " pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.611498 kubelet[2882]: I0916 05:04:37.611446 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11995f83ec52233aacf5ccb29a6d278b-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" (UID: \"11995f83ec52233aacf5ccb29a6d278b\") " pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" Sep 16 05:04:37.765000 sudo[2896]: pam_unix(sudo:session): session closed for user root Sep 16 05:04:38.040176 kubelet[2882]: I0916 05:04:38.039995 2882 apiserver.go:52] "Watching apiserver" Sep 16 05:04:38.108199 kubelet[2882]: I0916 05:04:38.108136 2882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:04:38.219646 kubelet[2882]: I0916 05:04:38.219522 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" podStartSLOduration=1.219476809 podStartE2EDuration="1.219476809s" podCreationTimestamp="2025-09-16 05:04:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:38.204834159 +0000 UTC m=+1.282387310" watchObservedRunningTime="2025-09-16 05:04:38.219476809 +0000 UTC m=+1.297029936" Sep 16 05:04:38.238118 kubelet[2882]: I0916 05:04:38.237631 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" podStartSLOduration=1.237575929 podStartE2EDuration="1.237575929s" podCreationTimestamp="2025-09-16 05:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:38.22011885 +0000 UTC m=+1.297671998" watchObservedRunningTime="2025-09-16 05:04:38.237575929 +0000 UTC m=+1.315129066" Sep 16 05:04:38.273118 kubelet[2882]: I0916 05:04:38.272657 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" podStartSLOduration=1.2726330319999999 podStartE2EDuration="1.272633032s" podCreationTimestamp="2025-09-16 05:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:38.238101749 +0000 UTC m=+1.315654900" watchObservedRunningTime="2025-09-16 05:04:38.272633032 +0000 UTC m=+1.350186190" Sep 16 05:04:39.890218 sudo[1911]: pam_unix(sudo:session): session closed for user root Sep 16 05:04:39.932620 sshd[1910]: Connection closed by 139.178.68.195 port 47938 Sep 16 05:04:39.933540 sshd-session[1907]: pam_unix(sshd:session): session closed for user core Sep 16 05:04:39.939799 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Sep 16 05:04:39.940520 systemd[1]: sshd@8-10.128.0.3:22-139.178.68.195:47938.service: Deactivated successfully. Sep 16 05:04:39.945028 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 05:04:39.945567 systemd[1]: session-9.scope: Consumed 7.557s CPU time, 275.7M memory peak. Sep 16 05:04:39.950418 systemd-logind[1555]: Removed session 9. Sep 16 05:04:41.534862 kubelet[2882]: I0916 05:04:41.534822 2882 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 05:04:41.536128 containerd[1578]: time="2025-09-16T05:04:41.536062337Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 05:04:41.536722 kubelet[2882]: I0916 05:04:41.536358 2882 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 05:04:42.297850 systemd[1]: Created slice kubepods-besteffort-pode035e26f_0e74_4088_a597_65aaead35301.slice - libcontainer container kubepods-besteffort-pode035e26f_0e74_4088_a597_65aaead35301.slice. Sep 16 05:04:42.331132 systemd[1]: Created slice kubepods-burstable-podbcad221e_22fb_49de_9b2c_cfa0d1cc09c3.slice - libcontainer container kubepods-burstable-podbcad221e_22fb_49de_9b2c_cfa0d1cc09c3.slice. 
Sep 16 05:04:42.342292 kubelet[2882]: I0916 05:04:42.342166 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tr6\" (UniqueName: \"kubernetes.io/projected/e035e26f-0e74-4088-a597-65aaead35301-kube-api-access-b9tr6\") pod \"kube-proxy-v59nk\" (UID: \"e035e26f-0e74-4088-a597-65aaead35301\") " pod="kube-system/kube-proxy-v59nk" Sep 16 05:04:42.342601 kubelet[2882]: I0916 05:04:42.342249 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cni-path\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.342814 kubelet[2882]: I0916 05:04:42.342427 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-etc-cni-netd\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.343158 kubelet[2882]: I0916 05:04:42.343020 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-kernel\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.343158 kubelet[2882]: I0916 05:04:42.343129 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e035e26f-0e74-4088-a597-65aaead35301-lib-modules\") pod \"kube-proxy-v59nk\" (UID: \"e035e26f-0e74-4088-a597-65aaead35301\") " pod="kube-system/kube-proxy-v59nk" Sep 16 05:04:42.343583 kubelet[2882]: I0916 05:04:42.343462 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-bpf-maps\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.343583 kubelet[2882]: I0916 05:04:42.343522 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-lib-modules\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.343909 kubelet[2882]: I0916 05:04:42.343850 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-xtables-lock\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344191 kubelet[2882]: I0916 05:04:42.344043 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-config-path\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344191 kubelet[2882]: I0916 05:04:42.344136 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e035e26f-0e74-4088-a597-65aaead35301-xtables-lock\") pod \"kube-proxy-v59nk\" (UID: \"e035e26f-0e74-4088-a597-65aaead35301\") " pod="kube-system/kube-proxy-v59nk" Sep 16 05:04:42.344191 kubelet[2882]: I0916 05:04:42.344168 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-run\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344556 kubelet[2882]: I0916 05:04:42.344431 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-clustermesh-secrets\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344556 kubelet[2882]: I0916 05:04:42.344492 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-net\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344556 kubelet[2882]: I0916 05:04:42.344518 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hubble-tls\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344813 kubelet[2882]: I0916 05:04:42.344748 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfdd\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.344987 kubelet[2882]: I0916 05:04:42.344794 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e035e26f-0e74-4088-a597-65aaead35301-kube-proxy\") pod \"kube-proxy-v59nk\" (UID: \"e035e26f-0e74-4088-a597-65aaead35301\") " pod="kube-system/kube-proxy-v59nk" Sep 16 05:04:42.344987 kubelet[2882]: I0916 05:04:42.344940 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hostproc\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.345207 kubelet[2882]: I0916 05:04:42.344967 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-cgroup\") pod \"cilium-j95ld\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " pod="kube-system/cilium-j95ld" Sep 16 05:04:42.515671 kubelet[2882]: E0916 05:04:42.515629 2882 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 05:04:42.515671 kubelet[2882]: E0916 05:04:42.515674 2882 projected.go:194] Error preparing data for projected volume kube-api-access-b9tr6 for pod 
kube-system/kube-proxy-v59nk: configmap "kube-root-ca.crt" not found Sep 16 05:04:42.515930 kubelet[2882]: E0916 05:04:42.515767 2882 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e035e26f-0e74-4088-a597-65aaead35301-kube-api-access-b9tr6 podName:e035e26f-0e74-4088-a597-65aaead35301 nodeName:}" failed. No retries permitted until 2025-09-16 05:04:43.015737429 +0000 UTC m=+6.093290573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b9tr6" (UniqueName: "kubernetes.io/projected/e035e26f-0e74-4088-a597-65aaead35301-kube-api-access-b9tr6") pod "kube-proxy-v59nk" (UID: "e035e26f-0e74-4088-a597-65aaead35301") : configmap "kube-root-ca.crt" not found Sep 16 05:04:42.515930 kubelet[2882]: E0916 05:04:42.515854 2882 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 16 05:04:42.515930 kubelet[2882]: E0916 05:04:42.515870 2882 projected.go:194] Error preparing data for projected volume kube-api-access-mpfdd for pod kube-system/cilium-j95ld: configmap "kube-root-ca.crt" not found Sep 16 05:04:42.515930 kubelet[2882]: E0916 05:04:42.515911 2882 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd podName:bcad221e-22fb-49de-9b2c-cfa0d1cc09c3 nodeName:}" failed. No retries permitted until 2025-09-16 05:04:43.015897266 +0000 UTC m=+6.093450389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mpfdd" (UniqueName: "kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd") pod "cilium-j95ld" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3") : configmap "kube-root-ca.crt" not found Sep 16 05:04:42.773229 kubelet[2882]: I0916 05:04:42.771958 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4233acd-877b-4699-b5e3-dfcc2c6cd533-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4k9ht\" (UID: \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\") " pod="kube-system/cilium-operator-6c4d7847fc-4k9ht" Sep 16 05:04:42.773426 systemd[1]: Created slice kubepods-besteffort-podf4233acd_877b_4699_b5e3_dfcc2c6cd533.slice - libcontainer container kubepods-besteffort-podf4233acd_877b_4699_b5e3_dfcc2c6cd533.slice. 
Sep 16 05:04:42.775318 kubelet[2882]: I0916 05:04:42.775257 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk967\" (UniqueName: \"kubernetes.io/projected/f4233acd-877b-4699-b5e3-dfcc2c6cd533-kube-api-access-bk967\") pod \"cilium-operator-6c4d7847fc-4k9ht\" (UID: \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\") " pod="kube-system/cilium-operator-6c4d7847fc-4k9ht" Sep 16 05:04:43.084573 containerd[1578]: time="2025-09-16T05:04:43.084442193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4k9ht,Uid:f4233acd-877b-4699-b5e3-dfcc2c6cd533,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:43.126124 containerd[1578]: time="2025-09-16T05:04:43.125929715Z" level=info msg="connecting to shim e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7" address="unix:///run/containerd/s/2d5163f4109d28c38c83fb8d8d4c2a2021506e92d7a8982a0ba38c8f6e08df5e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:43.161384 systemd[1]: Started cri-containerd-e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7.scope - libcontainer container e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7. Sep 16 05:04:43.212399 containerd[1578]: time="2025-09-16T05:04:43.211276350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v59nk,Uid:e035e26f-0e74-4088-a597-65aaead35301,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:43.237229 containerd[1578]: time="2025-09-16T05:04:43.237147877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4k9ht,Uid:f4233acd-877b-4699-b5e3-dfcc2c6cd533,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\"" Sep 16 05:04:43.238999 containerd[1578]: time="2025-09-16T05:04:43.238960895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j95ld,Uid:bcad221e-22fb-49de-9b2c-cfa0d1cc09c3,Namespace:kube-system,Attempt:0,}" Sep 16 05:04:43.242244 containerd[1578]: time="2025-09-16T05:04:43.242033910Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 16 05:04:43.249300 containerd[1578]: time="2025-09-16T05:04:43.249211948Z" level=info msg="connecting to shim a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a" address="unix:///run/containerd/s/b8afadb3a73879ba1c1ba51ceedb9aab4980f62b6c3f32ef0312e004738adfa9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:43.282680 containerd[1578]: time="2025-09-16T05:04:43.282624240Z" level=info msg="connecting to shim 28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:04:43.292899 systemd[1]: Started cri-containerd-a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a.scope - libcontainer container a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a. Sep 16 05:04:43.348524 systemd[1]: Started cri-containerd-28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127.scope - libcontainer container 28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127. 
Sep 16 05:04:43.399030 containerd[1578]: time="2025-09-16T05:04:43.398869891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v59nk,Uid:e035e26f-0e74-4088-a597-65aaead35301,Namespace:kube-system,Attempt:0,} returns sandbox id \"a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a\"" Sep 16 05:04:43.410791 containerd[1578]: time="2025-09-16T05:04:43.410726471Z" level=info msg="CreateContainer within sandbox \"a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 05:04:43.416053 containerd[1578]: time="2025-09-16T05:04:43.415977218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j95ld,Uid:bcad221e-22fb-49de-9b2c-cfa0d1cc09c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\"" Sep 16 05:04:43.427117 containerd[1578]: time="2025-09-16T05:04:43.426536497Z" level=info msg="Container 4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:43.436998 containerd[1578]: time="2025-09-16T05:04:43.436949536Z" level=info msg="CreateContainer within sandbox \"a522bcc4ffce3677a59b99a96b5c39c795be975ff0e20998642225f6718dea1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c\"" Sep 16 05:04:43.438687 containerd[1578]: time="2025-09-16T05:04:43.438635898Z" level=info msg="StartContainer for \"4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c\"" Sep 16 05:04:43.441942 containerd[1578]: time="2025-09-16T05:04:43.441891950Z" level=info msg="connecting to shim 4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c" address="unix:///run/containerd/s/b8afadb3a73879ba1c1ba51ceedb9aab4980f62b6c3f32ef0312e004738adfa9" protocol=ttrpc version=3 Sep 16 05:04:43.495730 systemd[1]: Started cri-containerd-4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c.scope - libcontainer container 4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c. Sep 16 05:04:43.563739 containerd[1578]: time="2025-09-16T05:04:43.563583677Z" level=info msg="StartContainer for \"4aaed92511f527e3a2c2af521d1fa048d0be85b6314f372a0ce477fddd3d199c\" returns successfully" Sep 16 05:04:44.280708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913794782.mount: Deactivated successfully. 
Sep 16 05:04:44.342967 kubelet[2882]: I0916 05:04:44.342815 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v59nk" podStartSLOduration=2.342788839 podStartE2EDuration="2.342788839s" podCreationTimestamp="2025-09-16 05:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:04:44.331078472 +0000 UTC m=+7.408631624" watchObservedRunningTime="2025-09-16 05:04:44.342788839 +0000 UTC m=+7.420341990" Sep 16 05:04:45.231331 containerd[1578]: time="2025-09-16T05:04:45.231257250Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:45.232861 containerd[1578]: time="2025-09-16T05:04:45.232625826Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 16 05:04:45.234116 containerd[1578]: time="2025-09-16T05:04:45.234059826Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:45.236458 containerd[1578]: time="2025-09-16T05:04:45.236301036Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.993215375s" Sep 16 05:04:45.236458 containerd[1578]: time="2025-09-16T05:04:45.236349206Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 16 05:04:45.238737 containerd[1578]: time="2025-09-16T05:04:45.238382694Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 16 05:04:45.242946 containerd[1578]: time="2025-09-16T05:04:45.242902019Z" level=info msg="CreateContainer within sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 16 05:04:45.256117 containerd[1578]: time="2025-09-16T05:04:45.255427104Z" level=info msg="Container 5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:45.271914 containerd[1578]: time="2025-09-16T05:04:45.271857715Z" level=info msg="CreateContainer within sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\"" Sep 16 05:04:45.274116 containerd[1578]: time="2025-09-16T05:04:45.272769799Z" level=info msg="StartContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\"" Sep 16 05:04:45.275679 containerd[1578]: time="2025-09-16T05:04:45.275639772Z" level=info msg="connecting to shim 5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac" 
address="unix:///run/containerd/s/2d5163f4109d28c38c83fb8d8d4c2a2021506e92d7a8982a0ba38c8f6e08df5e" protocol=ttrpc version=3 Sep 16 05:04:45.317348 systemd[1]: Started cri-containerd-5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac.scope - libcontainer container 5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac. Sep 16 05:04:45.372994 containerd[1578]: time="2025-09-16T05:04:45.372938295Z" level=info msg="StartContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" returns successfully" Sep 16 05:04:50.959589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671412767.mount: Deactivated successfully. Sep 16 05:04:53.976321 containerd[1578]: time="2025-09-16T05:04:53.976238026Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.977953 containerd[1578]: time="2025-09-16T05:04:53.977655745Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 16 05:04:53.979184 containerd[1578]: time="2025-09-16T05:04:53.979060170Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:04:53.981317 containerd[1578]: time="2025-09-16T05:04:53.981271392Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.742840683s" Sep 16 05:04:53.981530 containerd[1578]: time="2025-09-16T05:04:53.981500094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 16 05:04:53.988155 containerd[1578]: time="2025-09-16T05:04:53.987784519Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:04:54.001117 containerd[1578]: time="2025-09-16T05:04:53.997416296Z" level=info msg="Container e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:54.009673 containerd[1578]: time="2025-09-16T05:04:54.009616219Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\"" Sep 16 05:04:54.011488 containerd[1578]: time="2025-09-16T05:04:54.011312699Z" level=info msg="StartContainer for \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\"" Sep 16 05:04:54.013258 containerd[1578]: time="2025-09-16T05:04:54.013215414Z" level=info msg="connecting to shim e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" protocol=ttrpc version=3 Sep 16 
05:04:54.047332 systemd[1]: Started cri-containerd-e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6.scope - libcontainer container e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6. Sep 16 05:04:54.096372 containerd[1578]: time="2025-09-16T05:04:54.096304988Z" level=info msg="StartContainer for \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" returns successfully" Sep 16 05:04:54.122173 systemd[1]: cri-containerd-e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6.scope: Deactivated successfully. Sep 16 05:04:54.127110 containerd[1578]: time="2025-09-16T05:04:54.126860542Z" level=info msg="received exit event container_id:\"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" id:\"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" pid:3345 exited_at:{seconds:1757999094 nanos:125891319}" Sep 16 05:04:54.127872 containerd[1578]: time="2025-09-16T05:04:54.127077778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" id:\"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" pid:3345 exited_at:{seconds:1757999094 nanos:125891319}" Sep 16 05:04:54.167827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6-rootfs.mount: Deactivated successfully. Sep 16 05:04:54.471307 kubelet[2882]: I0916 05:04:54.471225 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4k9ht" podStartSLOduration=10.47432137 podStartE2EDuration="12.471201066s" podCreationTimestamp="2025-09-16 05:04:42 +0000 UTC" firstStartedPulling="2025-09-16 05:04:43.240596179 +0000 UTC m=+6.318149308" lastFinishedPulling="2025-09-16 05:04:45.237475863 +0000 UTC m=+8.315029004" observedRunningTime="2025-09-16 05:04:46.765639955 +0000 UTC m=+9.843193105" watchObservedRunningTime="2025-09-16 05:04:54.471201066 +0000 UTC m=+17.548754223" Sep 16 05:04:57.466615 containerd[1578]: time="2025-09-16T05:04:57.466560227Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 05:04:57.486206 containerd[1578]: time="2025-09-16T05:04:57.485615602Z" level=info msg="Container 49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:57.509857 containerd[1578]: time="2025-09-16T05:04:57.509795116Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\"" Sep 16 05:04:57.511511 containerd[1578]: time="2025-09-16T05:04:57.511467684Z" level=info msg="StartContainer for \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\"" Sep 16 05:04:57.514591 containerd[1578]: time="2025-09-16T05:04:57.514533475Z" level=info msg="connecting to shim 49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" protocol=ttrpc version=3 Sep 16 05:04:57.552351 systemd[1]: Started cri-containerd-49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b.scope - libcontainer container 
49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b. Sep 16 05:04:57.601391 containerd[1578]: time="2025-09-16T05:04:57.601333725Z" level=info msg="StartContainer for \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" returns successfully" Sep 16 05:04:57.626829 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 05:04:57.628068 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:04:57.629476 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:04:57.634705 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:04:57.639980 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 05:04:57.646384 systemd[1]: cri-containerd-49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b.scope: Deactivated successfully. Sep 16 05:04:57.655008 containerd[1578]: time="2025-09-16T05:04:57.654840533Z" level=info msg="received exit event container_id:\"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" id:\"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" pid:3397 exited_at:{seconds:1757999097 nanos:654367503}" Sep 16 05:04:57.655722 containerd[1578]: time="2025-09-16T05:04:57.655654865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" id:\"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" pid:3397 exited_at:{seconds:1757999097 nanos:654367503}" Sep 16 05:04:57.691141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:04:58.469882 containerd[1578]: time="2025-09-16T05:04:58.469829924Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 05:04:58.483875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b-rootfs.mount: Deactivated successfully. Sep 16 05:04:58.502178 containerd[1578]: time="2025-09-16T05:04:58.500434338Z" level=info msg="Container d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:58.519563 containerd[1578]: time="2025-09-16T05:04:58.519477608Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\"" Sep 16 05:04:58.522048 containerd[1578]: time="2025-09-16T05:04:58.520348072Z" level=info msg="StartContainer for \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\"" Sep 16 05:04:58.522894 containerd[1578]: time="2025-09-16T05:04:58.522852252Z" level=info msg="connecting to shim d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" protocol=ttrpc version=3 Sep 16 05:04:58.557438 systemd[1]: Started cri-containerd-d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525.scope - libcontainer container d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525. 
Sep 16 05:04:58.625738 containerd[1578]: time="2025-09-16T05:04:58.625692373Z" level=info msg="StartContainer for \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" returns successfully" Sep 16 05:04:58.626046 systemd[1]: cri-containerd-d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525.scope: Deactivated successfully. Sep 16 05:04:58.629870 containerd[1578]: time="2025-09-16T05:04:58.629806466Z" level=info msg="received exit event container_id:\"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" id:\"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" pid:3443 exited_at:{seconds:1757999098 nanos:629521978}" Sep 16 05:04:58.630992 containerd[1578]: time="2025-09-16T05:04:58.630910700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" id:\"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" pid:3443 exited_at:{seconds:1757999098 nanos:629521978}" Sep 16 05:04:58.669633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525-rootfs.mount: Deactivated successfully. Sep 16 05:04:59.482874 containerd[1578]: time="2025-09-16T05:04:59.481829826Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 05:04:59.500477 containerd[1578]: time="2025-09-16T05:04:59.500403423Z" level=info msg="Container 9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:04:59.522348 containerd[1578]: time="2025-09-16T05:04:59.522287570Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\"" Sep 16 05:04:59.525657 containerd[1578]: time="2025-09-16T05:04:59.524020725Z" level=info msg="StartContainer for \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\"" Sep 16 05:04:59.526319 containerd[1578]: time="2025-09-16T05:04:59.526250863Z" level=info msg="connecting to shim 9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" protocol=ttrpc version=3 Sep 16 05:04:59.568410 systemd[1]: Started cri-containerd-9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4.scope - libcontainer container 9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4. Sep 16 05:04:59.613300 systemd[1]: cri-containerd-9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4.scope: Deactivated successfully. 
Sep 16 05:04:59.614156 containerd[1578]: time="2025-09-16T05:04:59.613830191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" id:\"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" pid:3488 exited_at:{seconds:1757999099 nanos:613349061}" Sep 16 05:04:59.616816 containerd[1578]: time="2025-09-16T05:04:59.616618486Z" level=info msg="received exit event container_id:\"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" id:\"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" pid:3488 exited_at:{seconds:1757999099 nanos:613349061}" Sep 16 05:04:59.630352 containerd[1578]: time="2025-09-16T05:04:59.630290135Z" level=info msg="StartContainer for \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" returns successfully" Sep 16 05:04:59.655790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4-rootfs.mount: Deactivated successfully. Sep 16 05:05:00.488529 containerd[1578]: time="2025-09-16T05:05:00.488461485Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 05:05:00.515681 containerd[1578]: time="2025-09-16T05:05:00.515215911Z" level=info msg="Container 5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:05:00.528630 containerd[1578]: time="2025-09-16T05:05:00.528565956Z" level=info msg="CreateContainer within sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\"" Sep 16 05:05:00.530136 containerd[1578]: time="2025-09-16T05:05:00.529360055Z" level=info msg="StartContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\"" Sep 16 05:05:00.531104 containerd[1578]: time="2025-09-16T05:05:00.530978258Z" level=info msg="connecting to shim 5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4" address="unix:///run/containerd/s/14aa4eaff69a4691c15b582f8b40da8045603763a48ca6feed3c49df6d5baf18" protocol=ttrpc version=3 Sep 16 05:05:00.568468 systemd[1]: Started cri-containerd-5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4.scope - libcontainer container 5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4. Sep 16 05:05:00.626121 containerd[1578]: time="2025-09-16T05:05:00.626054261Z" level=info msg="StartContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" returns successfully" Sep 16 05:05:00.750150 containerd[1578]: time="2025-09-16T05:05:00.749970740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" id:\"f9fa64f37a044d2ba57ac14ba2c4df843c970cb4eaf0c3b1f5025f0bd7a28cae\" pid:3555 exited_at:{seconds:1757999100 nanos:748551881}" Sep 16 05:05:00.762474 kubelet[2882]: I0916 05:05:00.762334 2882 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 05:05:00.824460 systemd[1]: Created slice kubepods-burstable-pod1e614f71_d49c_4a9c_adca_6b07606a775b.slice - libcontainer container kubepods-burstable-pod1e614f71_d49c_4a9c_adca_6b07606a775b.slice. 
Sep 16 05:05:00.839748 systemd[1]: Created slice kubepods-burstable-podedb44945_76df_4ffb_b4c3_3b8b661ad727.slice - libcontainer container kubepods-burstable-podedb44945_76df_4ffb_b4c3_3b8b661ad727.slice. Sep 16 05:05:00.913926 kubelet[2882]: I0916 05:05:00.913691 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vv2x\" (UniqueName: \"kubernetes.io/projected/1e614f71-d49c-4a9c-adca-6b07606a775b-kube-api-access-6vv2x\") pod \"coredns-674b8bbfcf-hqw28\" (UID: \"1e614f71-d49c-4a9c-adca-6b07606a775b\") " pod="kube-system/coredns-674b8bbfcf-hqw28" Sep 16 05:05:00.913926 kubelet[2882]: I0916 05:05:00.913771 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb44945-76df-4ffb-b4c3-3b8b661ad727-config-volume\") pod \"coredns-674b8bbfcf-shk9m\" (UID: \"edb44945-76df-4ffb-b4c3-3b8b661ad727\") " pod="kube-system/coredns-674b8bbfcf-shk9m" Sep 16 05:05:00.913926 kubelet[2882]: I0916 05:05:00.913807 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhq66\" (UniqueName: \"kubernetes.io/projected/edb44945-76df-4ffb-b4c3-3b8b661ad727-kube-api-access-zhq66\") pod \"coredns-674b8bbfcf-shk9m\" (UID: \"edb44945-76df-4ffb-b4c3-3b8b661ad727\") " pod="kube-system/coredns-674b8bbfcf-shk9m" Sep 16 05:05:00.913926 kubelet[2882]: I0916 05:05:00.913837 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e614f71-d49c-4a9c-adca-6b07606a775b-config-volume\") pod \"coredns-674b8bbfcf-hqw28\" (UID: \"1e614f71-d49c-4a9c-adca-6b07606a775b\") " pod="kube-system/coredns-674b8bbfcf-hqw28" Sep 16 05:05:01.133422 containerd[1578]: time="2025-09-16T05:05:01.132846420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hqw28,Uid:1e614f71-d49c-4a9c-adca-6b07606a775b,Namespace:kube-system,Attempt:0,}" Sep 16 05:05:01.152614 containerd[1578]: time="2025-09-16T05:05:01.152509155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk9m,Uid:edb44945-76df-4ffb-b4c3-3b8b661ad727,Namespace:kube-system,Attempt:0,}" Sep 16 05:05:01.529194 kubelet[2882]: I0916 05:05:01.528831 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j95ld" podStartSLOduration=8.964359272 podStartE2EDuration="19.528803986s" podCreationTimestamp="2025-09-16 05:04:42 +0000 UTC" firstStartedPulling="2025-09-16 05:04:43.418343794 +0000 UTC m=+6.495896927" lastFinishedPulling="2025-09-16 05:04:53.982788516 +0000 UTC m=+17.060341641" observedRunningTime="2025-09-16 05:05:01.521988604 +0000 UTC m=+24.599541793" watchObservedRunningTime="2025-09-16 05:05:01.528803986 +0000 UTC m=+24.606357137" Sep 16 05:05:03.104508 systemd-networkd[1466]: cilium_host: Link UP Sep 16 05:05:03.105736 systemd-networkd[1466]: cilium_net: Link UP Sep 16 05:05:03.106112 systemd-networkd[1466]: cilium_net: Gained carrier Sep 16 05:05:03.108550 systemd-networkd[1466]: cilium_host: Gained carrier Sep 16 05:05:03.196867 systemd-networkd[1466]: cilium_net: Gained IPv6LL Sep 16 05:05:03.268455 systemd-networkd[1466]: cilium_vxlan: Link UP Sep 16 05:05:03.269046 systemd-networkd[1466]: cilium_vxlan: Gained carrier Sep 16 05:05:03.532275 systemd-networkd[1466]: cilium_host: Gained IPv6LL Sep 16 05:05:03.564240 kernel: NET: Registered PF_ALG protocol family Sep 
16 05:05:04.446162 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL Sep 16 05:05:04.464218 systemd-networkd[1466]: lxc_health: Link UP Sep 16 05:05:04.487152 systemd-networkd[1466]: lxc_health: Gained carrier Sep 16 05:05:04.713121 kernel: eth0: renamed from tmpb3cb7 Sep 16 05:05:04.717385 systemd-networkd[1466]: lxcad552d0c3204: Link UP Sep 16 05:05:04.722138 systemd-networkd[1466]: lxcad552d0c3204: Gained carrier Sep 16 05:05:04.760056 systemd-networkd[1466]: lxc57b5d14672b9: Link UP Sep 16 05:05:04.770546 kernel: eth0: renamed from tmpd5c4f Sep 16 05:05:04.776919 systemd-networkd[1466]: lxc57b5d14672b9: Gained carrier Sep 16 05:05:05.660416 systemd-networkd[1466]: lxc_health: Gained IPv6LL Sep 16 05:05:06.172397 systemd-networkd[1466]: lxcad552d0c3204: Gained IPv6LL Sep 16 05:05:06.428446 systemd-networkd[1466]: lxc57b5d14672b9: Gained IPv6LL Sep 16 05:05:09.028565 ntpd[1695]: Listen normally on 6 cilium_host 192.168.0.144:123 Sep 16 05:05:09.028675 ntpd[1695]: Listen normally on 7 cilium_net [fe80::ac0c:39ff:fe2c:6a6a%4]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 6 cilium_host 192.168.0.144:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 7 cilium_net [fe80::ac0c:39ff:fe2c:6a6a%4]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 8 cilium_host [fe80::c826:9eff:fe2d:830%5]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 9 cilium_vxlan [fe80::609d:dbff:fe57:a432%6]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 10 lxc_health [fe80::7cd8:41ff:fea1:a92%8]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 11 lxcad552d0c3204 [fe80::84b0:3bff:fe86:bdf8%10]:123 Sep 16 05:05:09.029248 ntpd[1695]: 16 Sep 05:05:09 ntpd[1695]: Listen normally on 12 lxc57b5d14672b9 [fe80::3c18:3cff:fe05:a035%12]:123 Sep 16 05:05:09.028719 ntpd[1695]: Listen normally on 8 cilium_host [fe80::c826:9eff:fe2d:830%5]:123 Sep 16 05:05:09.028759 ntpd[1695]: Listen normally on 9 cilium_vxlan [fe80::609d:dbff:fe57:a432%6]:123 Sep 16 05:05:09.028805 ntpd[1695]: Listen normally on 10 lxc_health [fe80::7cd8:41ff:fea1:a92%8]:123 Sep 16 05:05:09.028847 ntpd[1695]: Listen normally on 11 lxcad552d0c3204 [fe80::84b0:3bff:fe86:bdf8%10]:123 Sep 16 05:05:09.028888 ntpd[1695]: Listen normally on 12 lxc57b5d14672b9 [fe80::3c18:3cff:fe05:a035%12]:123 Sep 16 05:05:10.053026 containerd[1578]: time="2025-09-16T05:05:10.052960994Z" level=info msg="connecting to shim d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88" address="unix:///run/containerd/s/d90b4e16f9ca0a25e282a77c9abba4e3b55ff7b30c982159519bc858b5d11f1e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:05:10.065849 containerd[1578]: time="2025-09-16T05:05:10.065790316Z" level=info msg="connecting to shim b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda" address="unix:///run/containerd/s/b894fa6f92394d51d2896507f2b1f0aa6db24527229293d7a934bebb65d2c133" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:05:10.140364 systemd[1]: Started cri-containerd-d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88.scope - libcontainer container d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88. 
Sep 16 05:05:10.159994 systemd[1]: Started cri-containerd-b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda.scope - libcontainer container b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda. Sep 16 05:05:10.288894 containerd[1578]: time="2025-09-16T05:05:10.288834891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shk9m,Uid:edb44945-76df-4ffb-b4c3-3b8b661ad727,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88\"" Sep 16 05:05:10.306234 containerd[1578]: time="2025-09-16T05:05:10.304351909Z" level=info msg="CreateContainer within sandbox \"d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:05:10.314058 containerd[1578]: time="2025-09-16T05:05:10.313868707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hqw28,Uid:1e614f71-d49c-4a9c-adca-6b07606a775b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda\"" Sep 16 05:05:10.323764 containerd[1578]: time="2025-09-16T05:05:10.323703411Z" level=info msg="CreateContainer within sandbox \"b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 05:05:10.335870 containerd[1578]: time="2025-09-16T05:05:10.333706954Z" level=info msg="Container 1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:05:10.342265 containerd[1578]: time="2025-09-16T05:05:10.342223716Z" level=info msg="Container 6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:05:10.352866 containerd[1578]: time="2025-09-16T05:05:10.352804104Z" level=info msg="CreateContainer within sandbox \"d5c4fd3fe79d027e352010266e1d80dd38c6466c0d01e005b97388369efbae88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b\"" Sep 16 05:05:10.356854 containerd[1578]: time="2025-09-16T05:05:10.356467523Z" level=info msg="StartContainer for \"1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b\"" Sep 16 05:05:10.358282 containerd[1578]: time="2025-09-16T05:05:10.358233013Z" level=info msg="connecting to shim 1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b" address="unix:///run/containerd/s/d90b4e16f9ca0a25e282a77c9abba4e3b55ff7b30c982159519bc858b5d11f1e" protocol=ttrpc version=3 Sep 16 05:05:10.365429 containerd[1578]: time="2025-09-16T05:05:10.365381774Z" level=info msg="CreateContainer within sandbox \"b3cb7f9c1369eb8cf46c8b5c302ba20cdcacec07d58c5d7dc3bf15b5ef72ebda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4\"" Sep 16 05:05:10.366705 containerd[1578]: time="2025-09-16T05:05:10.366656818Z" level=info msg="StartContainer for \"6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4\"" Sep 16 05:05:10.369070 containerd[1578]: time="2025-09-16T05:05:10.369030528Z" level=info msg="connecting to shim 6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4" address="unix:///run/containerd/s/b894fa6f92394d51d2896507f2b1f0aa6db24527229293d7a934bebb65d2c133" protocol=ttrpc version=3 Sep 16 05:05:10.399681 systemd[1]: Started 
cri-containerd-1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b.scope - libcontainer container 1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b. Sep 16 05:05:10.411362 systemd[1]: Started cri-containerd-6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4.scope - libcontainer container 6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4. Sep 16 05:05:10.483933 containerd[1578]: time="2025-09-16T05:05:10.483784396Z" level=info msg="StartContainer for \"1e5f5522fe1804aea8c958f924d02144d675eaa31dac93e2e11771a2198ee16b\" returns successfully" Sep 16 05:05:10.487464 containerd[1578]: time="2025-09-16T05:05:10.487304339Z" level=info msg="StartContainer for \"6b13a53129927c8c003f042d0b5002e8a79ea5c36f8df75020a7c4b382d911e4\" returns successfully" Sep 16 05:05:10.564450 kubelet[2882]: I0916 05:05:10.563482 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hqw28" podStartSLOduration=28.563456631 podStartE2EDuration="28.563456631s" podCreationTimestamp="2025-09-16 05:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:05:10.554688371 +0000 UTC m=+33.632241522" watchObservedRunningTime="2025-09-16 05:05:10.563456631 +0000 UTC m=+33.641009782" Sep 16 05:05:11.024424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661306935.mount: Deactivated successfully. Sep 16 05:05:11.552116 kubelet[2882]: I0916 05:05:11.552030 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-shk9m" podStartSLOduration=29.552000005 podStartE2EDuration="29.552000005s" podCreationTimestamp="2025-09-16 05:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:05:10.6158774 +0000 UTC m=+33.693430551" watchObservedRunningTime="2025-09-16 05:05:11.552000005 +0000 UTC m=+34.629553159" Sep 16 05:05:27.883824 systemd[1]: Started sshd@9-10.128.0.3:22-139.178.68.195:52472.service - OpenSSH per-connection server daemon (139.178.68.195:52472). Sep 16 05:05:28.197115 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 52472 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:28.199026 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:28.206295 systemd-logind[1555]: New session 10 of user core. Sep 16 05:05:28.217411 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 05:05:28.524884 sshd[4197]: Connection closed by 139.178.68.195 port 52472 Sep 16 05:05:28.526145 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:28.532728 systemd[1]: sshd@9-10.128.0.3:22-139.178.68.195:52472.service: Deactivated successfully. Sep 16 05:05:28.536541 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 05:05:28.538558 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Sep 16 05:05:28.541375 systemd-logind[1555]: Removed session 10. Sep 16 05:05:33.580684 systemd[1]: Started sshd@10-10.128.0.3:22-139.178.68.195:55398.service - OpenSSH per-connection server daemon (139.178.68.195:55398). 
Sep 16 05:05:33.893982 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 55398 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:33.895824 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:33.902933 systemd-logind[1555]: New session 11 of user core. Sep 16 05:05:33.909311 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 05:05:34.192171 sshd[4220]: Connection closed by 139.178.68.195 port 55398 Sep 16 05:05:34.193078 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:34.199592 systemd[1]: sshd@10-10.128.0.3:22-139.178.68.195:55398.service: Deactivated successfully. Sep 16 05:05:34.202710 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 05:05:34.204902 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Sep 16 05:05:34.207643 systemd-logind[1555]: Removed session 11. Sep 16 05:05:39.247699 systemd[1]: Started sshd@11-10.128.0.3:22-139.178.68.195:55414.service - OpenSSH per-connection server daemon (139.178.68.195:55414). Sep 16 05:05:39.561774 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 55414 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:39.564004 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:39.570583 systemd-logind[1555]: New session 12 of user core. Sep 16 05:05:39.577302 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 05:05:39.866350 sshd[4238]: Connection closed by 139.178.68.195 port 55414 Sep 16 05:05:39.867799 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:39.874275 systemd[1]: sshd@11-10.128.0.3:22-139.178.68.195:55414.service: Deactivated successfully. Sep 16 05:05:39.877541 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 05:05:39.879437 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Sep 16 05:05:39.882029 systemd-logind[1555]: Removed session 12. Sep 16 05:05:44.928708 systemd[1]: Started sshd@12-10.128.0.3:22-139.178.68.195:43426.service - OpenSSH per-connection server daemon (139.178.68.195:43426). Sep 16 05:05:45.240902 sshd[4252]: Accepted publickey for core from 139.178.68.195 port 43426 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:45.242869 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:45.252069 systemd-logind[1555]: New session 13 of user core. Sep 16 05:05:45.261408 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 05:05:45.541686 sshd[4255]: Connection closed by 139.178.68.195 port 43426 Sep 16 05:05:45.542629 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:45.550587 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Sep 16 05:05:45.551406 systemd[1]: sshd@12-10.128.0.3:22-139.178.68.195:43426.service: Deactivated successfully. Sep 16 05:05:45.555376 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 05:05:45.558617 systemd-logind[1555]: Removed session 13. Sep 16 05:05:45.601551 systemd[1]: Started sshd@13-10.128.0.3:22-139.178.68.195:43430.service - OpenSSH per-connection server daemon (139.178.68.195:43430). 
Sep 16 05:05:45.905223 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 43430 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:45.906956 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:45.914576 systemd-logind[1555]: New session 14 of user core. Sep 16 05:05:45.925422 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 05:05:46.253331 sshd[4271]: Connection closed by 139.178.68.195 port 43430 Sep 16 05:05:46.254455 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:46.261201 systemd[1]: sshd@13-10.128.0.3:22-139.178.68.195:43430.service: Deactivated successfully. Sep 16 05:05:46.265108 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 05:05:46.266311 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Sep 16 05:05:46.268970 systemd-logind[1555]: Removed session 14. Sep 16 05:05:46.309053 systemd[1]: Started sshd@14-10.128.0.3:22-139.178.68.195:43446.service - OpenSSH per-connection server daemon (139.178.68.195:43446). Sep 16 05:05:46.627800 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 43446 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:46.629968 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:46.637779 systemd-logind[1555]: New session 15 of user core. Sep 16 05:05:46.654474 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 05:05:46.921731 sshd[4284]: Connection closed by 139.178.68.195 port 43446 Sep 16 05:05:46.923167 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:46.929297 systemd[1]: sshd@14-10.128.0.3:22-139.178.68.195:43446.service: Deactivated successfully. Sep 16 05:05:46.932155 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 05:05:46.933964 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Sep 16 05:05:46.936372 systemd-logind[1555]: Removed session 15. Sep 16 05:05:51.980407 systemd[1]: Started sshd@15-10.128.0.3:22-139.178.68.195:60130.service - OpenSSH per-connection server daemon (139.178.68.195:60130). Sep 16 05:05:52.290258 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 60130 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:52.293519 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:52.302169 systemd-logind[1555]: New session 16 of user core. Sep 16 05:05:52.307367 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 05:05:52.585998 sshd[4300]: Connection closed by 139.178.68.195 port 60130 Sep 16 05:05:52.586883 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:52.593628 systemd[1]: sshd@15-10.128.0.3:22-139.178.68.195:60130.service: Deactivated successfully. Sep 16 05:05:52.596601 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 05:05:52.598732 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Sep 16 05:05:52.602032 systemd-logind[1555]: Removed session 16. Sep 16 05:05:57.649707 systemd[1]: Started sshd@16-10.128.0.3:22-139.178.68.195:60140.service - OpenSSH per-connection server daemon (139.178.68.195:60140). 
Sep 16 05:05:57.951834 sshd[4314]: Accepted publickey for core from 139.178.68.195 port 60140 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:05:57.953768 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:05:57.961538 systemd-logind[1555]: New session 17 of user core. Sep 16 05:05:57.979470 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 05:05:58.249226 sshd[4317]: Connection closed by 139.178.68.195 port 60140 Sep 16 05:05:58.250562 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Sep 16 05:05:58.256292 systemd[1]: sshd@16-10.128.0.3:22-139.178.68.195:60140.service: Deactivated successfully. Sep 16 05:05:58.260332 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 05:05:58.261856 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Sep 16 05:05:58.264180 systemd-logind[1555]: Removed session 17. Sep 16 05:06:03.304781 systemd[1]: Started sshd@17-10.128.0.3:22-139.178.68.195:45594.service - OpenSSH per-connection server daemon (139.178.68.195:45594). Sep 16 05:06:03.611216 sshd[4329]: Accepted publickey for core from 139.178.68.195 port 45594 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:03.613310 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:03.620904 systemd-logind[1555]: New session 18 of user core. Sep 16 05:06:03.626352 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 05:06:03.906613 sshd[4332]: Connection closed by 139.178.68.195 port 45594 Sep 16 05:06:03.907796 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:03.913348 systemd[1]: sshd@17-10.128.0.3:22-139.178.68.195:45594.service: Deactivated successfully. Sep 16 05:06:03.917135 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 05:06:03.920688 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Sep 16 05:06:03.922711 systemd-logind[1555]: Removed session 18. Sep 16 05:06:03.965933 systemd[1]: Started sshd@18-10.128.0.3:22-139.178.68.195:45602.service - OpenSSH per-connection server daemon (139.178.68.195:45602). Sep 16 05:06:04.275873 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 45602 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:04.277724 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:04.285238 systemd-logind[1555]: New session 19 of user core. Sep 16 05:06:04.289346 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 05:06:04.641809 sshd[4347]: Connection closed by 139.178.68.195 port 45602 Sep 16 05:06:04.643064 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:04.649556 systemd[1]: sshd@18-10.128.0.3:22-139.178.68.195:45602.service: Deactivated successfully. Sep 16 05:06:04.653312 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 05:06:04.654685 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Sep 16 05:06:04.658392 systemd-logind[1555]: Removed session 19. Sep 16 05:06:04.696672 systemd[1]: Started sshd@19-10.128.0.3:22-139.178.68.195:45608.service - OpenSSH per-connection server daemon (139.178.68.195:45608). 
Sep 16 05:06:05.007439 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 45608 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:05.009256 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:05.017167 systemd-logind[1555]: New session 20 of user core. Sep 16 05:06:05.026388 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 05:06:05.852163 sshd[4359]: Connection closed by 139.178.68.195 port 45608 Sep 16 05:06:05.856394 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:05.866851 systemd[1]: sshd@19-10.128.0.3:22-139.178.68.195:45608.service: Deactivated successfully. Sep 16 05:06:05.874769 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 05:06:05.876507 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Sep 16 05:06:05.880690 systemd-logind[1555]: Removed session 20. Sep 16 05:06:05.917479 systemd[1]: Started sshd@20-10.128.0.3:22-139.178.68.195:45624.service - OpenSSH per-connection server daemon (139.178.68.195:45624). Sep 16 05:06:06.253052 sshd[4376]: Accepted publickey for core from 139.178.68.195 port 45624 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:06.254711 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:06.263027 systemd-logind[1555]: New session 21 of user core. Sep 16 05:06:06.269391 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 05:06:06.694320 sshd[4379]: Connection closed by 139.178.68.195 port 45624 Sep 16 05:06:06.695465 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:06.702863 systemd[1]: sshd@20-10.128.0.3:22-139.178.68.195:45624.service: Deactivated successfully. Sep 16 05:06:06.706912 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 05:06:06.708775 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Sep 16 05:06:06.710936 systemd-logind[1555]: Removed session 21. Sep 16 05:06:06.750475 systemd[1]: Started sshd@21-10.128.0.3:22-139.178.68.195:45632.service - OpenSSH per-connection server daemon (139.178.68.195:45632). Sep 16 05:06:07.066598 sshd[4389]: Accepted publickey for core from 139.178.68.195 port 45632 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:07.067415 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:07.074175 systemd-logind[1555]: New session 22 of user core. Sep 16 05:06:07.088449 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 05:06:07.361443 sshd[4392]: Connection closed by 139.178.68.195 port 45632 Sep 16 05:06:07.362829 sshd-session[4389]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:07.368989 systemd[1]: sshd@21-10.128.0.3:22-139.178.68.195:45632.service: Deactivated successfully. Sep 16 05:06:07.372701 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 05:06:07.375591 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Sep 16 05:06:07.379350 systemd-logind[1555]: Removed session 22. Sep 16 05:06:12.415847 systemd[1]: Started sshd@22-10.128.0.3:22-139.178.68.195:59034.service - OpenSSH per-connection server daemon (139.178.68.195:59034). 
Sep 16 05:06:12.723741 sshd[4406]: Accepted publickey for core from 139.178.68.195 port 59034 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:12.725653 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:12.732201 systemd-logind[1555]: New session 23 of user core. Sep 16 05:06:12.739402 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 05:06:13.014794 sshd[4409]: Connection closed by 139.178.68.195 port 59034 Sep 16 05:06:13.016185 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:13.022257 systemd[1]: sshd@22-10.128.0.3:22-139.178.68.195:59034.service: Deactivated successfully. Sep 16 05:06:13.025912 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 05:06:13.027776 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Sep 16 05:06:13.030935 systemd-logind[1555]: Removed session 23. Sep 16 05:06:18.072063 systemd[1]: Started sshd@23-10.128.0.3:22-139.178.68.195:59050.service - OpenSSH per-connection server daemon (139.178.68.195:59050). Sep 16 05:06:18.382358 sshd[4424]: Accepted publickey for core from 139.178.68.195 port 59050 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:18.384515 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:18.392286 systemd-logind[1555]: New session 24 of user core. Sep 16 05:06:18.399342 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 05:06:18.679856 sshd[4427]: Connection closed by 139.178.68.195 port 59050 Sep 16 05:06:18.681191 sshd-session[4424]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:18.687732 systemd[1]: sshd@23-10.128.0.3:22-139.178.68.195:59050.service: Deactivated successfully. Sep 16 05:06:18.691765 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 05:06:18.694188 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. Sep 16 05:06:18.696614 systemd-logind[1555]: Removed session 24. Sep 16 05:06:23.735590 systemd[1]: Started sshd@24-10.128.0.3:22-139.178.68.195:34652.service - OpenSSH per-connection server daemon (139.178.68.195:34652). Sep 16 05:06:24.043437 sshd[4440]: Accepted publickey for core from 139.178.68.195 port 34652 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:24.045426 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:24.053161 systemd-logind[1555]: New session 25 of user core. Sep 16 05:06:24.059340 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 05:06:24.336386 sshd[4444]: Connection closed by 139.178.68.195 port 34652 Sep 16 05:06:24.337647 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:24.343896 systemd[1]: sshd@24-10.128.0.3:22-139.178.68.195:34652.service: Deactivated successfully. Sep 16 05:06:24.346704 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 05:06:24.348687 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. Sep 16 05:06:24.351661 systemd-logind[1555]: Removed session 25. Sep 16 05:06:24.391484 systemd[1]: Started sshd@25-10.128.0.3:22-139.178.68.195:34660.service - OpenSSH per-connection server daemon (139.178.68.195:34660). 
Sep 16 05:06:24.705637 sshd[4456]: Accepted publickey for core from 139.178.68.195 port 34660 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:24.707637 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:24.719173 systemd-logind[1555]: New session 26 of user core. Sep 16 05:06:24.727306 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 16 05:06:26.673881 containerd[1578]: time="2025-09-16T05:06:26.673802506Z" level=info msg="StopContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" with timeout 30 (s)" Sep 16 05:06:26.675219 containerd[1578]: time="2025-09-16T05:06:26.675176911Z" level=info msg="Stop container \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" with signal terminated" Sep 16 05:06:26.693617 systemd[1]: cri-containerd-5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac.scope: Deactivated successfully. Sep 16 05:06:26.698876 containerd[1578]: time="2025-09-16T05:06:26.698730404Z" level=info msg="received exit event container_id:\"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" id:\"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" pid:3286 exited_at:{seconds:1757999186 nanos:698349062}" Sep 16 05:06:26.699208 containerd[1578]: time="2025-09-16T05:06:26.698768306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" id:\"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" pid:3286 exited_at:{seconds:1757999186 nanos:698349062}" Sep 16 05:06:26.753174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac-rootfs.mount: Deactivated successfully. 
Sep 16 05:06:26.774381 containerd[1578]: time="2025-09-16T05:06:26.774327270Z" level=info msg="StopContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" returns successfully" Sep 16 05:06:26.775330 containerd[1578]: time="2025-09-16T05:06:26.775039688Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 05:06:26.776306 containerd[1578]: time="2025-09-16T05:06:26.776264365Z" level=info msg="StopPodSandbox for \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\"" Sep 16 05:06:26.776528 containerd[1578]: time="2025-09-16T05:06:26.776352826Z" level=info msg="Container to stop \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.784610 containerd[1578]: time="2025-09-16T05:06:26.784551611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" id:\"0c756b96743372fe25bf50dbbc9d8bd57a82b54e7a30dfeffd49f8b0e1118bfa\" pid:4501 exited_at:{seconds:1757999186 nanos:783785946}" Sep 16 05:06:26.790474 containerd[1578]: time="2025-09-16T05:06:26.790273782Z" level=info msg="StopContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" with timeout 2 (s)" Sep 16 05:06:26.790993 containerd[1578]: time="2025-09-16T05:06:26.790958804Z" level=info msg="Stop container \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" with signal terminated" Sep 16 05:06:26.802383 systemd[1]: cri-containerd-e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7.scope: Deactivated successfully. Sep 16 05:06:26.808951 containerd[1578]: time="2025-09-16T05:06:26.808516403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" pid:2991 exit_status:137 exited_at:{seconds:1757999186 nanos:807330583}" Sep 16 05:06:26.817719 systemd-networkd[1466]: lxc_health: Link DOWN Sep 16 05:06:26.817735 systemd-networkd[1466]: lxc_health: Lost carrier Sep 16 05:06:26.838586 systemd[1]: cri-containerd-5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4.scope: Deactivated successfully. Sep 16 05:06:26.839697 systemd[1]: cri-containerd-5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4.scope: Consumed 9.733s CPU time, 126.4M memory peak, 128K read from disk, 13.3M written to disk. Sep 16 05:06:26.844114 containerd[1578]: time="2025-09-16T05:06:26.843673953Z" level=info msg="received exit event container_id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" pid:3525 exited_at:{seconds:1757999186 nanos:843164950}" Sep 16 05:06:26.879711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7-rootfs.mount: Deactivated successfully. 
Sep 16 05:06:26.886158 containerd[1578]: time="2025-09-16T05:06:26.885979082Z" level=info msg="shim disconnected" id=e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7 namespace=k8s.io Sep 16 05:06:26.886158 containerd[1578]: time="2025-09-16T05:06:26.886026401Z" level=warning msg="cleaning up after shim disconnected" id=e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7 namespace=k8s.io Sep 16 05:06:26.886158 containerd[1578]: time="2025-09-16T05:06:26.886040924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:06:26.886700 containerd[1578]: time="2025-09-16T05:06:26.886618267Z" level=error msg="Failed to handle event container_id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" pid:2991 exit_status:137 exited_at:{seconds:1757999186 nanos:807330583} for e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" Sep 16 05:06:26.887204 containerd[1578]: time="2025-09-16T05:06:26.887170213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" id:\"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" pid:3525 exited_at:{seconds:1757999186 nanos:843164950}" Sep 16 05:06:26.894981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4-rootfs.mount: Deactivated successfully. Sep 16 05:06:26.906903 containerd[1578]: time="2025-09-16T05:06:26.906840528Z" level=info msg="StopContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" returns successfully" Sep 16 05:06:26.910328 containerd[1578]: time="2025-09-16T05:06:26.910267728Z" level=info msg="StopPodSandbox for \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\"" Sep 16 05:06:26.910473 containerd[1578]: time="2025-09-16T05:06:26.910373364Z" level=info msg="Container to stop \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.910473 containerd[1578]: time="2025-09-16T05:06:26.910396155Z" level=info msg="Container to stop \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.910473 containerd[1578]: time="2025-09-16T05:06:26.910411085Z" level=info msg="Container to stop \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.910473 containerd[1578]: time="2025-09-16T05:06:26.910425147Z" level=info msg="Container to stop \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.910473 containerd[1578]: time="2025-09-16T05:06:26.910439028Z" level=info msg="Container to stop \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 05:06:26.928705 systemd[1]: cri-containerd-28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127.scope: Deactivated successfully. 
Sep 16 05:06:26.935481 containerd[1578]: time="2025-09-16T05:06:26.935437178Z" level=info msg="received exit event sandbox_id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" exit_status:137 exited_at:{seconds:1757999186 nanos:807330583}" Sep 16 05:06:26.937970 containerd[1578]: time="2025-09-16T05:06:26.937931682Z" level=info msg="TearDown network for sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" successfully" Sep 16 05:06:26.938180 containerd[1578]: time="2025-09-16T05:06:26.938150896Z" level=info msg="StopPodSandbox for \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" returns successfully" Sep 16 05:06:26.938901 containerd[1578]: time="2025-09-16T05:06:26.938857627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" id:\"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" pid:3076 exit_status:137 exited_at:{seconds:1757999186 nanos:932509635}" Sep 16 05:06:26.943910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7-shm.mount: Deactivated successfully. Sep 16 05:06:26.995058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127-rootfs.mount: Deactivated successfully. Sep 16 05:06:26.998231 containerd[1578]: time="2025-09-16T05:06:26.997638171Z" level=info msg="shim disconnected" id=28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127 namespace=k8s.io Sep 16 05:06:26.998231 containerd[1578]: time="2025-09-16T05:06:26.997713801Z" level=warning msg="cleaning up after shim disconnected" id=28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127 namespace=k8s.io Sep 16 05:06:26.998231 containerd[1578]: time="2025-09-16T05:06:26.997795855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 05:06:27.014977 containerd[1578]: time="2025-09-16T05:06:27.014821828Z" level=info msg="received exit event sandbox_id:\"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" exit_status:137 exited_at:{seconds:1757999186 nanos:932509635}" Sep 16 05:06:27.015450 containerd[1578]: time="2025-09-16T05:06:27.015244138Z" level=info msg="TearDown network for sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" successfully" Sep 16 05:06:27.015450 containerd[1578]: time="2025-09-16T05:06:27.015276631Z" level=info msg="StopPodSandbox for \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" returns successfully" Sep 16 05:06:27.049459 kubelet[2882]: I0916 05:06:27.049387 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4233acd-877b-4699-b5e3-dfcc2c6cd533-cilium-config-path\") pod \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\" (UID: \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\") " Sep 16 05:06:27.049459 kubelet[2882]: I0916 05:06:27.049450 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk967\" (UniqueName: \"kubernetes.io/projected/f4233acd-877b-4699-b5e3-dfcc2c6cd533-kube-api-access-bk967\") pod \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\" (UID: \"f4233acd-877b-4699-b5e3-dfcc2c6cd533\") " Sep 16 05:06:27.054937 kubelet[2882]: I0916 05:06:27.054732 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f4233acd-877b-4699-b5e3-dfcc2c6cd533-kube-api-access-bk967" (OuterVolumeSpecName: "kube-api-access-bk967") pod "f4233acd-877b-4699-b5e3-dfcc2c6cd533" (UID: "f4233acd-877b-4699-b5e3-dfcc2c6cd533"). InnerVolumeSpecName "kube-api-access-bk967". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:06:27.054937 kubelet[2882]: I0916 05:06:27.054890 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4233acd-877b-4699-b5e3-dfcc2c6cd533-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4233acd-877b-4699-b5e3-dfcc2c6cd533" (UID: "f4233acd-877b-4699-b5e3-dfcc2c6cd533"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 05:06:27.149908 kubelet[2882]: I0916 05:06:27.149819 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-run\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.149908 kubelet[2882]: I0916 05:06:27.149907 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-net\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.149933 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-lib-modules\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.149994 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-config-path\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.150020 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hubble-tls\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.150041 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-cgroup\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.150067 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cni-path\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150218 kubelet[2882]: I0916 05:06:27.150127 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-etc-cni-netd\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") 
" Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150151 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-xtables-lock\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150181 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-clustermesh-secrets\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150215 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfdd\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150240 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hostproc\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150269 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-bpf-maps\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150502 kubelet[2882]: I0916 05:06:27.150295 2882 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-kernel\") pod \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\" (UID: \"bcad221e-22fb-49de-9b2c-cfa0d1cc09c3\") " Sep 16 05:06:27.150774 kubelet[2882]: I0916 05:06:27.150362 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4233acd-877b-4699-b5e3-dfcc2c6cd533-cilium-config-path\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.150774 kubelet[2882]: I0916 05:06:27.150382 2882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bk967\" (UniqueName: \"kubernetes.io/projected/f4233acd-877b-4699-b5e3-dfcc2c6cd533-kube-api-access-bk967\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.150774 kubelet[2882]: I0916 05:06:27.150459 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.150774 kubelet[2882]: I0916 05:06:27.150507 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.150774 kubelet[2882]: I0916 05:06:27.150532 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.152270 kubelet[2882]: I0916 05:06:27.150555 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.152270 kubelet[2882]: I0916 05:06:27.150996 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.153963 kubelet[2882]: I0916 05:06:27.153864 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 05:06:27.156599 kubelet[2882]: I0916 05:06:27.156565 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.156777 kubelet[2882]: I0916 05:06:27.156753 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.156912 kubelet[2882]: I0916 05:06:27.156892 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.157175 kubelet[2882]: I0916 05:06:27.157148 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:06:27.157844 kubelet[2882]: I0916 05:06:27.157814 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.160370 kubelet[2882]: I0916 05:06:27.160314 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 16 05:06:27.160765 kubelet[2882]: I0916 05:06:27.160726 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 05:06:27.162571 kubelet[2882]: I0916 05:06:27.162502 2882 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd" (OuterVolumeSpecName: "kube-api-access-mpfdd") pod "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" (UID: "bcad221e-22fb-49de-9b2c-cfa0d1cc09c3"). InnerVolumeSpecName "kube-api-access-mpfdd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 05:06:27.239632 systemd[1]: Removed slice kubepods-besteffort-podf4233acd_877b_4699_b5e3_dfcc2c6cd533.slice - libcontainer container kubepods-besteffort-podf4233acd_877b_4699_b5e3_dfcc2c6cd533.slice. Sep 16 05:06:27.244551 systemd[1]: Removed slice kubepods-burstable-podbcad221e_22fb_49de_9b2c_cfa0d1cc09c3.slice - libcontainer container kubepods-burstable-podbcad221e_22fb_49de_9b2c_cfa0d1cc09c3.slice. Sep 16 05:06:27.244731 systemd[1]: kubepods-burstable-podbcad221e_22fb_49de_9b2c_cfa0d1cc09c3.slice: Consumed 9.875s CPU time, 126.9M memory peak, 128K read from disk, 13.3M written to disk. 
Sep 16 05:06:27.251448 kubelet[2882]: I0916 05:06:27.251392 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-run\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251448 kubelet[2882]: I0916 05:06:27.251433 2882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-net\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251448 kubelet[2882]: I0916 05:06:27.251456 2882 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-lib-modules\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251472 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-config-path\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251488 2882 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hubble-tls\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251503 2882 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cilium-cgroup\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251517 2882 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-cni-path\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251534 2882 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-etc-cni-netd\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251548 2882 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-xtables-lock\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251693 kubelet[2882]: I0916 05:06:27.251564 2882 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-clustermesh-secrets\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251906 kubelet[2882]: I0916 05:06:27.251580 2882 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpfdd\" (UniqueName: \"kubernetes.io/projected/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-kube-api-access-mpfdd\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251906 kubelet[2882]: I0916 05:06:27.251595 2882 
reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-hostproc\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251906 kubelet[2882]: I0916 05:06:27.251610 2882 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-bpf-maps\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.251906 kubelet[2882]: I0916 05:06:27.251627 2882 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3-host-proc-sys-kernel\") on node \"ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8\" DevicePath \"\"" Sep 16 05:06:27.390260 kubelet[2882]: E0916 05:06:27.390161 2882 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:06:27.721006 kubelet[2882]: I0916 05:06:27.720506 2882 scope.go:117] "RemoveContainer" containerID="5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac" Sep 16 05:06:27.730952 containerd[1578]: time="2025-09-16T05:06:27.730309603Z" level=info msg="RemoveContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\"" Sep 16 05:06:27.745573 containerd[1578]: time="2025-09-16T05:06:27.742909454Z" level=info msg="RemoveContainer for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" returns successfully" Sep 16 05:06:27.747527 kubelet[2882]: I0916 05:06:27.747496 2882 scope.go:117] "RemoveContainer" containerID="5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac" Sep 16 05:06:27.749883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127-shm.mount: Deactivated successfully. Sep 16 05:06:27.750516 systemd[1]: var-lib-kubelet-pods-bcad221e\x2d22fb\x2d49de\x2d9b2c\x2dcfa0d1cc09c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpfdd.mount: Deactivated successfully. Sep 16 05:06:27.750634 systemd[1]: var-lib-kubelet-pods-f4233acd\x2d877b\x2d4699\x2db5e3\x2ddfcc2c6cd533-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbk967.mount: Deactivated successfully. Sep 16 05:06:27.750767 systemd[1]: var-lib-kubelet-pods-bcad221e\x2d22fb\x2d49de\x2d9b2c\x2dcfa0d1cc09c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 05:06:27.750870 systemd[1]: var-lib-kubelet-pods-bcad221e\x2d22fb\x2d49de\x2d9b2c\x2dcfa0d1cc09c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 16 05:06:27.758116 containerd[1578]: time="2025-09-16T05:06:27.755526362Z" level=error msg="ContainerStatus for \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\": not found" Sep 16 05:06:27.758248 kubelet[2882]: E0916 05:06:27.756046 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\": not found" containerID="5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac" Sep 16 05:06:27.758248 kubelet[2882]: I0916 05:06:27.756162 2882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac"} err="failed to get container status \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fa9a0f4b643add4751c703d223d9e4c1e45cb445f0ba825c00ef5506d81b2ac\": not found" Sep 16 05:06:27.758248 kubelet[2882]: I0916 05:06:27.756322 2882 scope.go:117] "RemoveContainer" containerID="5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4" Sep 16 05:06:27.761818 containerd[1578]: time="2025-09-16T05:06:27.761662839Z" level=info msg="RemoveContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\"" Sep 16 05:06:27.770527 containerd[1578]: time="2025-09-16T05:06:27.770458521Z" level=info msg="RemoveContainer for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" returns successfully" Sep 16 05:06:27.770781 kubelet[2882]: I0916 05:06:27.770753 2882 scope.go:117] "RemoveContainer" containerID="9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4" Sep 16 05:06:27.774281 containerd[1578]: time="2025-09-16T05:06:27.774166155Z" level=info msg="RemoveContainer for \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\"" Sep 16 05:06:27.782243 containerd[1578]: time="2025-09-16T05:06:27.782185000Z" level=info msg="RemoveContainer for \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" returns successfully" Sep 16 05:06:27.782487 kubelet[2882]: I0916 05:06:27.782463 2882 scope.go:117] "RemoveContainer" containerID="d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525" Sep 16 05:06:27.787585 containerd[1578]: time="2025-09-16T05:06:27.787519007Z" level=info msg="RemoveContainer for \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\"" Sep 16 05:06:27.793821 containerd[1578]: time="2025-09-16T05:06:27.793772710Z" level=info msg="RemoveContainer for \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" returns successfully" Sep 16 05:06:27.794065 kubelet[2882]: I0916 05:06:27.794018 2882 scope.go:117] "RemoveContainer" containerID="49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b" Sep 16 05:06:27.797651 containerd[1578]: time="2025-09-16T05:06:27.797574477Z" level=info msg="RemoveContainer for \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\"" Sep 16 05:06:27.803646 containerd[1578]: time="2025-09-16T05:06:27.803598580Z" level=info msg="RemoveContainer for \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" returns successfully" Sep 16 05:06:27.803888 kubelet[2882]: I0916 05:06:27.803857 
2882 scope.go:117] "RemoveContainer" containerID="e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6" Sep 16 05:06:27.806220 containerd[1578]: time="2025-09-16T05:06:27.806155521Z" level=info msg="RemoveContainer for \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\"" Sep 16 05:06:27.814124 containerd[1578]: time="2025-09-16T05:06:27.811727707Z" level=info msg="RemoveContainer for \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" returns successfully" Sep 16 05:06:27.815185 kubelet[2882]: I0916 05:06:27.815077 2882 scope.go:117] "RemoveContainer" containerID="5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4" Sep 16 05:06:27.815939 containerd[1578]: time="2025-09-16T05:06:27.815641914Z" level=error msg="ContainerStatus for \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\": not found" Sep 16 05:06:27.816431 kubelet[2882]: E0916 05:06:27.816373 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\": not found" containerID="5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4" Sep 16 05:06:27.816538 kubelet[2882]: I0916 05:06:27.816435 2882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4"} err="failed to get container status \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ab2615441aaab1f89c214588847723edd163d301b21249ef42cbd2417bf4cc4\": not found" Sep 16 05:06:27.816538 kubelet[2882]: I0916 05:06:27.816474 2882 scope.go:117] "RemoveContainer" containerID="9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4" Sep 16 05:06:27.816904 containerd[1578]: time="2025-09-16T05:06:27.816844130Z" level=error msg="ContainerStatus for \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\": not found" Sep 16 05:06:27.817158 kubelet[2882]: E0916 05:06:27.817038 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\": not found" containerID="9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4" Sep 16 05:06:27.817235 kubelet[2882]: I0916 05:06:27.817136 2882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4"} err="failed to get container status \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9be79a49b17e27d0254aa7c7b2e3734e62d5b187146b0908824a16108ca7afa4\": not found" Sep 16 05:06:27.817235 kubelet[2882]: I0916 05:06:27.817199 2882 scope.go:117] "RemoveContainer" containerID="d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525" Sep 16 05:06:27.818031 containerd[1578]: 
time="2025-09-16T05:06:27.817984859Z" level=error msg="ContainerStatus for \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\": not found" Sep 16 05:06:27.818328 kubelet[2882]: E0916 05:06:27.818293 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\": not found" containerID="d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525" Sep 16 05:06:27.818430 kubelet[2882]: I0916 05:06:27.818381 2882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525"} err="failed to get container status \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8c88171d0b9aa98ca01984bc9af531aa95264c2c7e4c0274dac117b7579a525\": not found" Sep 16 05:06:27.818485 kubelet[2882]: I0916 05:06:27.818441 2882 scope.go:117] "RemoveContainer" containerID="49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b" Sep 16 05:06:27.819026 containerd[1578]: time="2025-09-16T05:06:27.818921801Z" level=error msg="ContainerStatus for \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\": not found" Sep 16 05:06:27.820111 kubelet[2882]: E0916 05:06:27.819734 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\": not found" containerID="49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b" Sep 16 05:06:27.820214 kubelet[2882]: I0916 05:06:27.820143 2882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b"} err="failed to get container status \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"49934eab8517916e2f0989a087c45f9951d57e0d16d346660595c2236890cd9b\": not found" Sep 16 05:06:27.820214 kubelet[2882]: I0916 05:06:27.820176 2882 scope.go:117] "RemoveContainer" containerID="e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6" Sep 16 05:06:27.822413 containerd[1578]: time="2025-09-16T05:06:27.822281570Z" level=error msg="ContainerStatus for \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\": not found" Sep 16 05:06:27.823218 kubelet[2882]: E0916 05:06:27.822846 2882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\": not found" containerID="e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6" Sep 16 05:06:27.823218 kubelet[2882]: I0916 05:06:27.822980 2882 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6"} err="failed to get container status \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e045e8f49ba5e6e8186ae3339e59edb423cd242d8dd08ffadd2bd3e1ce6c94f6\": not found" Sep 16 05:06:28.633764 sshd[4460]: Connection closed by 139.178.68.195 port 34660 Sep 16 05:06:28.634923 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:28.641859 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Sep 16 05:06:28.643424 systemd[1]: sshd@25-10.128.0.3:22-139.178.68.195:34660.service: Deactivated successfully. Sep 16 05:06:28.646525 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 05:06:28.646864 systemd[1]: session-26.scope: Consumed 1.174s CPU time, 24M memory peak. Sep 16 05:06:28.649751 systemd-logind[1555]: Removed session 26. Sep 16 05:06:28.692975 systemd[1]: Started sshd@26-10.128.0.3:22-139.178.68.195:34666.service - OpenSSH per-connection server daemon (139.178.68.195:34666). Sep 16 05:06:28.840652 containerd[1578]: time="2025-09-16T05:06:28.840178356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" id:\"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" pid:2991 exit_status:137 exited_at:{seconds:1757999186 nanos:807330583}" Sep 16 05:06:29.000780 sshd[4617]: Accepted publickey for core from 139.178.68.195 port 34666 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:29.002642 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:29.009592 systemd-logind[1555]: New session 27 of user core. Sep 16 05:06:29.019424 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 16 05:06:29.028281 ntpd[1695]: Deleting 10 lxc_health, [fe80::7cd8:41ff:fea1:a92%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs Sep 16 05:06:29.028648 ntpd[1695]: 16 Sep 05:06:29 ntpd[1695]: Deleting 10 lxc_health, [fe80::7cd8:41ff:fea1:a92%8]:123, stats: received=0, sent=0, dropped=0, active_time=80 secs Sep 16 05:06:29.228127 kubelet[2882]: I0916 05:06:29.227327 2882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcad221e-22fb-49de-9b2c-cfa0d1cc09c3" path="/var/lib/kubelet/pods/bcad221e-22fb-49de-9b2c-cfa0d1cc09c3/volumes" Sep 16 05:06:29.229107 kubelet[2882]: I0916 05:06:29.229058 2882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4233acd-877b-4699-b5e3-dfcc2c6cd533" path="/var/lib/kubelet/pods/f4233acd-877b-4699-b5e3-dfcc2c6cd533/volumes" Sep 16 05:06:29.664244 kubelet[2882]: I0916 05:06:29.664180 2882 setters.go:618] "Node became not ready" node="ci-4459-0-0-nightly-20250915-2100-ca42cf693ef7e8252ba8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-16T05:06:29Z","lastTransitionTime":"2025-09-16T05:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 16 05:06:30.277731 sshd[4620]: Connection closed by 139.178.68.195 port 34666 Sep 16 05:06:30.279014 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:30.292717 systemd[1]: sshd@26-10.128.0.3:22-139.178.68.195:34666.service: Deactivated successfully. Sep 16 05:06:30.299919 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 05:06:30.301326 systemd[1]: session-27.scope: Consumed 1.019s CPU time, 24M memory peak. Sep 16 05:06:30.304582 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Sep 16 05:06:30.339798 systemd[1]: Created slice kubepods-burstable-podeb9962e5_52cb_42ba_bc53_4b8d05c2d177.slice - libcontainer container kubepods-burstable-podeb9962e5_52cb_42ba_bc53_4b8d05c2d177.slice. Sep 16 05:06:30.345866 systemd[1]: Started sshd@27-10.128.0.3:22-139.178.68.195:33138.service - OpenSSH per-connection server daemon (139.178.68.195:33138). Sep 16 05:06:30.351920 systemd-logind[1555]: Removed session 27. 
Sep 16 05:06:30.375366 kubelet[2882]: I0916 05:06:30.375316 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-xtables-lock\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.377892 kubelet[2882]: I0916 05:06:30.377241 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-cilium-config-path\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.377892 kubelet[2882]: I0916 05:06:30.377348 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-etc-cni-netd\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.377892 kubelet[2882]: I0916 05:06:30.377587 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-clustermesh-secrets\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.377892 kubelet[2882]: I0916 05:06:30.377677 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qwj\" (UniqueName: \"kubernetes.io/projected/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-kube-api-access-c2qwj\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.377892 kubelet[2882]: I0916 05:06:30.377776 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-hostproc\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.378509 kubelet[2882]: I0916 05:06:30.377837 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-cilium-ipsec-secrets\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.378509 kubelet[2882]: I0916 05:06:30.378230 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-lib-modules\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379152 kubelet[2882]: I0916 05:06:30.378683 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-bpf-maps\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379152 kubelet[2882]: I0916 05:06:30.378755 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-host-proc-sys-net\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379612 kubelet[2882]: I0916 05:06:30.379353 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-host-proc-sys-kernel\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379612 kubelet[2882]: I0916 05:06:30.379429 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-hubble-tls\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379612 kubelet[2882]: I0916 05:06:30.379466 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-cilium-cgroup\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379612 kubelet[2882]: I0916 05:06:30.379496 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-cilium-run\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.379612 kubelet[2882]: I0916 05:06:30.379525 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb9962e5-52cb-42ba-bc53-4b8d05c2d177-cni-path\") pod \"cilium-9gtwj\" (UID: \"eb9962e5-52cb-42ba-bc53-4b8d05c2d177\") " pod="kube-system/cilium-9gtwj" Sep 16 05:06:30.663138 containerd[1578]: time="2025-09-16T05:06:30.662953946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gtwj,Uid:eb9962e5-52cb-42ba-bc53-4b8d05c2d177,Namespace:kube-system,Attempt:0,}" Sep 16 05:06:30.695943 containerd[1578]: time="2025-09-16T05:06:30.695777480Z" level=info msg="connecting to shim 910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:06:30.700213 sshd[4631]: Accepted publickey for core from 139.178.68.195 port 33138 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:30.703644 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:30.716171 systemd-logind[1555]: New session 28 of user core. Sep 16 05:06:30.723613 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 16 05:06:30.740375 systemd[1]: Started cri-containerd-910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086.scope - libcontainer container 910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086. 
Sep 16 05:06:30.780314 containerd[1578]: time="2025-09-16T05:06:30.780233424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gtwj,Uid:eb9962e5-52cb-42ba-bc53-4b8d05c2d177,Namespace:kube-system,Attempt:0,} returns sandbox id \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\"" Sep 16 05:06:30.789939 containerd[1578]: time="2025-09-16T05:06:30.789880866Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:06:30.799166 containerd[1578]: time="2025-09-16T05:06:30.799068572Z" level=info msg="Container 9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:06:30.806549 containerd[1578]: time="2025-09-16T05:06:30.806489367Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\"" Sep 16 05:06:30.807617 containerd[1578]: time="2025-09-16T05:06:30.807582935Z" level=info msg="StartContainer for \"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\"" Sep 16 05:06:30.809834 containerd[1578]: time="2025-09-16T05:06:30.809792118Z" level=info msg="connecting to shim 9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" protocol=ttrpc version=3 Sep 16 05:06:30.838359 systemd[1]: Started cri-containerd-9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d.scope - libcontainer container 9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d. Sep 16 05:06:30.887071 containerd[1578]: time="2025-09-16T05:06:30.886915114Z" level=info msg="StartContainer for \"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\" returns successfully" Sep 16 05:06:30.898853 systemd[1]: cri-containerd-9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d.scope: Deactivated successfully. Sep 16 05:06:30.904295 containerd[1578]: time="2025-09-16T05:06:30.904245922Z" level=info msg="received exit event container_id:\"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\" id:\"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\" pid:4698 exited_at:{seconds:1757999190 nanos:903763263}" Sep 16 05:06:30.904706 containerd[1578]: time="2025-09-16T05:06:30.904638698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\" id:\"9429d7cf59e78fac96c1d9a2d8d58a1ddad667b602d01113c5b0251db8d80d3d\" pid:4698 exited_at:{seconds:1757999190 nanos:903763263}" Sep 16 05:06:30.912342 sshd[4669]: Connection closed by 139.178.68.195 port 33138 Sep 16 05:06:30.913625 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:30.924015 systemd[1]: sshd@27-10.128.0.3:22-139.178.68.195:33138.service: Deactivated successfully. Sep 16 05:06:30.929616 systemd[1]: session-28.scope: Deactivated successfully. Sep 16 05:06:30.932174 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Sep 16 05:06:30.937478 systemd-logind[1555]: Removed session 28. Sep 16 05:06:30.971226 systemd[1]: Started sshd@28-10.128.0.3:22-139.178.68.195:33148.service - OpenSSH per-connection server daemon (139.178.68.195:33148). 
Sep 16 05:06:31.228260 kubelet[2882]: E0916 05:06:31.228157 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-shk9m" podUID="edb44945-76df-4ffb-b4c3-3b8b661ad727" Sep 16 05:06:31.290995 sshd[4736]: Accepted publickey for core from 139.178.68.195 port 33148 ssh2: RSA SHA256:RInjx+req76vKTvoLEt9bakTDpyH6hMWtCW0Wm3lmbI Sep 16 05:06:31.292868 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:06:31.301162 systemd-logind[1555]: New session 29 of user core. Sep 16 05:06:31.306565 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 16 05:06:31.768878 containerd[1578]: time="2025-09-16T05:06:31.768801603Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 05:06:31.785131 containerd[1578]: time="2025-09-16T05:06:31.782828801Z" level=info msg="Container d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:06:31.792209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216489230.mount: Deactivated successfully. Sep 16 05:06:31.798395 containerd[1578]: time="2025-09-16T05:06:31.798269025Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\"" Sep 16 05:06:31.801361 containerd[1578]: time="2025-09-16T05:06:31.801298822Z" level=info msg="StartContainer for \"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\"" Sep 16 05:06:31.802731 containerd[1578]: time="2025-09-16T05:06:31.802685798Z" level=info msg="connecting to shim d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" protocol=ttrpc version=3 Sep 16 05:06:31.837464 systemd[1]: Started cri-containerd-d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62.scope - libcontainer container d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62. Sep 16 05:06:31.887571 containerd[1578]: time="2025-09-16T05:06:31.887483734Z" level=info msg="StartContainer for \"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\" returns successfully" Sep 16 05:06:31.894999 systemd[1]: cri-containerd-d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62.scope: Deactivated successfully. 
Sep 16 05:06:31.901417 containerd[1578]: time="2025-09-16T05:06:31.901360222Z" level=info msg="received exit event container_id:\"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\" id:\"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\" pid:4758 exited_at:{seconds:1757999191 nanos:899434730}" Sep 16 05:06:31.901864 containerd[1578]: time="2025-09-16T05:06:31.901703951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\" id:\"d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62\" pid:4758 exited_at:{seconds:1757999191 nanos:899434730}" Sep 16 05:06:31.943557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5def5b0566e17279386259ea9ee81710ffaf81a9787a66fa0ad69611e069a62-rootfs.mount: Deactivated successfully. Sep 16 05:06:32.392328 kubelet[2882]: E0916 05:06:32.392257 2882 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:06:32.772671 containerd[1578]: time="2025-09-16T05:06:32.772624333Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 05:06:32.811103 containerd[1578]: time="2025-09-16T05:06:32.807263453Z" level=info msg="Container 676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:06:32.822839 containerd[1578]: time="2025-09-16T05:06:32.822782882Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\"" Sep 16 05:06:32.823929 containerd[1578]: time="2025-09-16T05:06:32.823890599Z" level=info msg="StartContainer for \"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\"" Sep 16 05:06:32.826520 containerd[1578]: time="2025-09-16T05:06:32.826478887Z" level=info msg="connecting to shim 676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" protocol=ttrpc version=3 Sep 16 05:06:32.876344 systemd[1]: Started cri-containerd-676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec.scope - libcontainer container 676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec. Sep 16 05:06:33.050599 containerd[1578]: time="2025-09-16T05:06:33.050268205Z" level=info msg="StartContainer for \"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\" returns successfully" Sep 16 05:06:33.054594 systemd[1]: cri-containerd-676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec.scope: Deactivated successfully. 
Sep 16 05:06:33.061224 containerd[1578]: time="2025-09-16T05:06:33.060178082Z" level=info msg="received exit event container_id:\"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\" id:\"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\" pid:4803 exited_at:{seconds:1757999193 nanos:59503443}" Sep 16 05:06:33.061224 containerd[1578]: time="2025-09-16T05:06:33.060514834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\" id:\"676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec\" pid:4803 exited_at:{seconds:1757999193 nanos:59503443}" Sep 16 05:06:33.114675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-676989a49b7210f1bf05ffdbf2470a83b870f15b3ca1b566feee7089abbeb2ec-rootfs.mount: Deactivated successfully. Sep 16 05:06:33.222734 kubelet[2882]: E0916 05:06:33.222661 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-shk9m" podUID="edb44945-76df-4ffb-b4c3-3b8b661ad727" Sep 16 05:06:33.780153 containerd[1578]: time="2025-09-16T05:06:33.779523664Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 05:06:33.795124 containerd[1578]: time="2025-09-16T05:06:33.794949350Z" level=info msg="Container 390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:06:33.812930 containerd[1578]: time="2025-09-16T05:06:33.812852646Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\"" Sep 16 05:06:33.814061 containerd[1578]: time="2025-09-16T05:06:33.813561188Z" level=info msg="StartContainer for \"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\"" Sep 16 05:06:33.816991 containerd[1578]: time="2025-09-16T05:06:33.816947067Z" level=info msg="connecting to shim 390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" protocol=ttrpc version=3 Sep 16 05:06:33.860420 systemd[1]: Started cri-containerd-390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88.scope - libcontainer container 390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88. Sep 16 05:06:33.900529 systemd[1]: cri-containerd-390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88.scope: Deactivated successfully. 
Sep 16 05:06:33.904979 containerd[1578]: time="2025-09-16T05:06:33.904903139Z" level=info msg="received exit event container_id:\"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\" id:\"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\" pid:4842 exited_at:{seconds:1757999193 nanos:903233424}" Sep 16 05:06:33.905397 containerd[1578]: time="2025-09-16T05:06:33.905364322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\" id:\"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\" pid:4842 exited_at:{seconds:1757999193 nanos:903233424}" Sep 16 05:06:33.905958 containerd[1578]: time="2025-09-16T05:06:33.905914048Z" level=info msg="StartContainer for \"390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88\" returns successfully" Sep 16 05:06:33.944214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-390ed8b1ed276850d5f9beee2d47a89ed157b72e2aca65bbc13011ec72d1da88-rootfs.mount: Deactivated successfully. Sep 16 05:06:34.790375 containerd[1578]: time="2025-09-16T05:06:34.790314252Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 05:06:34.816118 containerd[1578]: time="2025-09-16T05:06:34.814218756Z" level=info msg="Container 50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:06:34.821615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810838506.mount: Deactivated successfully. Sep 16 05:06:34.839652 containerd[1578]: time="2025-09-16T05:06:34.839158908Z" level=info msg="CreateContainer within sandbox \"910b18e814f7feb334fc892f2a6777913f1881e4b79314e16ee4971705c3d086\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\"" Sep 16 05:06:34.842435 containerd[1578]: time="2025-09-16T05:06:34.842391512Z" level=info msg="StartContainer for \"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\"" Sep 16 05:06:34.844012 containerd[1578]: time="2025-09-16T05:06:34.843971497Z" level=info msg="connecting to shim 50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e" address="unix:///run/containerd/s/0dc81f33537a6174b7442157d6e23b56dde05c4439a8b1e12813072c9a92ce17" protocol=ttrpc version=3 Sep 16 05:06:34.888373 systemd[1]: Started cri-containerd-50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e.scope - libcontainer container 50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e. 
Sep 16 05:06:34.951913 containerd[1578]: time="2025-09-16T05:06:34.951862338Z" level=info msg="StartContainer for \"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" returns successfully" Sep 16 05:06:35.055164 containerd[1578]: time="2025-09-16T05:06:35.053974039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" id:\"639a9106d821f512b2cfdfae5a35c567986b73bc47c7af23f142891e7eb5d732\" pid:4910 exited_at:{seconds:1757999195 nanos:53276981}" Sep 16 05:06:35.228970 kubelet[2882]: E0916 05:06:35.228483 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-shk9m" podUID="edb44945-76df-4ffb-b4c3-3b8b661ad727" Sep 16 05:06:35.499180 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 16 05:06:35.812712 kubelet[2882]: I0916 05:06:35.811405 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9gtwj" podStartSLOduration=5.811380718 podStartE2EDuration="5.811380718s" podCreationTimestamp="2025-09-16 05:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:06:35.811063724 +0000 UTC m=+118.888616901" watchObservedRunningTime="2025-09-16 05:06:35.811380718 +0000 UTC m=+118.888933869" Sep 16 05:06:37.150209 containerd[1578]: time="2025-09-16T05:06:37.150159005Z" level=info msg="StopPodSandbox for \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\"" Sep 16 05:06:37.151859 containerd[1578]: time="2025-09-16T05:06:37.150355438Z" level=info msg="TearDown network for sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" successfully" Sep 16 05:06:37.151859 containerd[1578]: time="2025-09-16T05:06:37.150378740Z" level=info msg="StopPodSandbox for \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" returns successfully" Sep 16 05:06:37.151859 containerd[1578]: time="2025-09-16T05:06:37.150917759Z" level=info msg="RemovePodSandbox for \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\"" Sep 16 05:06:37.151859 containerd[1578]: time="2025-09-16T05:06:37.150955421Z" level=info msg="Forcibly stopping sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\"" Sep 16 05:06:37.151859 containerd[1578]: time="2025-09-16T05:06:37.151127757Z" level=info msg="TearDown network for sandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" successfully" Sep 16 05:06:37.154274 containerd[1578]: time="2025-09-16T05:06:37.153361965Z" level=info msg="Ensure that sandbox e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7 in task-service has been cleanup successfully" Sep 16 05:06:37.160560 containerd[1578]: time="2025-09-16T05:06:37.160480038Z" level=info msg="RemovePodSandbox \"e8478078a20f7271450c4bedd70d0ae7f97176969159a71daa3a25b9fda924f7\" returns successfully" Sep 16 05:06:37.161646 containerd[1578]: time="2025-09-16T05:06:37.161580694Z" level=info msg="StopPodSandbox for \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\"" Sep 16 05:06:37.161820 containerd[1578]: time="2025-09-16T05:06:37.161772403Z" level=info msg="TearDown network for sandbox 
\"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" successfully" Sep 16 05:06:37.161893 containerd[1578]: time="2025-09-16T05:06:37.161816137Z" level=info msg="StopPodSandbox for \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" returns successfully" Sep 16 05:06:37.162375 containerd[1578]: time="2025-09-16T05:06:37.162335483Z" level=info msg="RemovePodSandbox for \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\"" Sep 16 05:06:37.162516 containerd[1578]: time="2025-09-16T05:06:37.162489960Z" level=info msg="Forcibly stopping sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\"" Sep 16 05:06:37.162662 containerd[1578]: time="2025-09-16T05:06:37.162627576Z" level=info msg="TearDown network for sandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" successfully" Sep 16 05:06:37.165397 containerd[1578]: time="2025-09-16T05:06:37.165347842Z" level=info msg="Ensure that sandbox 28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127 in task-service has been cleanup successfully" Sep 16 05:06:37.171068 containerd[1578]: time="2025-09-16T05:06:37.170895539Z" level=info msg="RemovePodSandbox \"28cb9115742fcd8d49b75d600b2598bc35c6e02afb207b9cc8f2e151af1d2127\" returns successfully" Sep 16 05:06:37.229545 kubelet[2882]: E0916 05:06:37.229006 2882 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-shk9m" podUID="edb44945-76df-4ffb-b4c3-3b8b661ad727" Sep 16 05:06:37.872523 containerd[1578]: time="2025-09-16T05:06:37.872461700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" id:\"5c4b8ff97a61bd453f8a9154a62ea787725305de85dc3b918157a9468936cca8\" pid:5095 exit_status:1 exited_at:{seconds:1757999197 nanos:871473248}" Sep 16 05:06:38.938170 systemd-networkd[1466]: lxc_health: Link UP Sep 16 05:06:38.950840 systemd-networkd[1466]: lxc_health: Gained carrier Sep 16 05:06:40.159598 containerd[1578]: time="2025-09-16T05:06:40.159515413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" id:\"edaf18860a26decbe789c011889b12eeb7d6683139a643d38da60f2c4f8114e6\" pid:5436 exited_at:{seconds:1757999200 nanos:158473110}" Sep 16 05:06:40.764416 systemd-networkd[1466]: lxc_health: Gained IPv6LL Sep 16 05:06:42.498156 containerd[1578]: time="2025-09-16T05:06:42.498074273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" id:\"ff84ef3312bf3ac56d12278ab869d8e05d868eca992db6597810140d53756657\" pid:5467 exited_at:{seconds:1757999202 nanos:497663759}" Sep 16 05:06:43.028876 ntpd[1695]: Listen normally on 13 lxc_health [fe80::9ce7:20ff:fe22:763d%14]:123 Sep 16 05:06:43.029566 ntpd[1695]: 16 Sep 05:06:43 ntpd[1695]: Listen normally on 13 lxc_health [fe80::9ce7:20ff:fe22:763d%14]:123 Sep 16 05:06:44.723224 containerd[1578]: time="2025-09-16T05:06:44.723165314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50d254b099dc7ed8f99422e98ecd85f9c193b6c8375c7dcff749437987d2f76e\" id:\"113a74f5879a89e3ca5a9d8d6b246adba74ad792d4f55f3ec01e95cb783d6fe5\" pid:5494 exited_at:{seconds:1757999204 nanos:721855515}" Sep 16 05:06:44.778131 sshd[4739]: 
Connection closed by 139.178.68.195 port 33148 Sep 16 05:06:44.779066 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Sep 16 05:06:44.794375 systemd[1]: sshd@28-10.128.0.3:22-139.178.68.195:33148.service: Deactivated successfully. Sep 16 05:06:44.802939 systemd[1]: session-29.scope: Deactivated successfully. Sep 16 05:06:44.811718 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit. Sep 16 05:06:44.815808 systemd-logind[1555]: Removed session 29. Sep 16 05:06:44.946530 update_engine[1557]: I20250916 05:06:44.946412 1557 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.947218 1557 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.947502 1557 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949189 1557 omaha_request_params.cc:62] Current group set to developer Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949396 1557 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949417 1557 update_attempter.cc:643] Scheduling an action processor start. Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949446 1557 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949506 1557 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 16 05:06:44.951177 update_engine[1557]: I20250916 05:06:44.949604 1557 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 16 05:06:44.951606 update_engine[1557]: I20250916 05:06:44.949617 1557 omaha_request_action.cc:272] Request: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: Sep 16 05:06:44.951606 update_engine[1557]: I20250916 05:06:44.951282 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 16 05:06:44.952983 update_engine[1557]: I20250916 05:06:44.952942 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 16 05:06:44.954173 update_engine[1557]: I20250916 05:06:44.953995 1557 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 16 05:06:44.954737 locksmithd[1622]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 16 05:06:44.964811 update_engine[1557]: E20250916 05:06:44.964612 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 16 05:06:44.964811 update_engine[1557]: I20250916 05:06:44.964749 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 1