Jul 15 23:46:56.592904 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025 Jul 15 23:46:56.592949 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:46:56.592966 kernel: BIOS-provided physical RAM map: Jul 15 23:46:56.592979 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 15 23:46:56.592992 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 15 23:46:56.593004 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 15 23:46:56.593022 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 15 23:46:56.593034 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 15 23:46:56.593046 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jul 15 23:46:56.593058 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jul 15 23:46:56.593072 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jul 15 23:46:56.593085 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 15 23:46:56.593099 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 15 23:46:56.593113 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 15 23:46:56.593134 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 15 23:46:56.593150 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 15 23:46:56.593165 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 15 23:46:56.593181 kernel: NX (Execute Disable) protection: active Jul 15 23:46:56.593195 kernel: APIC: Static calls initialized Jul 15 23:46:56.593211 kernel: efi: EFI v2.7 by EDK II Jul 15 23:46:56.593226 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jul 15 23:46:56.593242 kernel: random: crng init done Jul 15 23:46:56.593260 kernel: secureboot: Secure boot disabled Jul 15 23:46:56.593276 kernel: SMBIOS 2.4 present. 
Jul 15 23:46:56.593291 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jul 15 23:46:56.593307 kernel: DMI: Memory slots populated: 1/1 Jul 15 23:46:56.593322 kernel: Hypervisor detected: KVM Jul 15 23:46:56.593377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 23:46:56.593396 kernel: kvm-clock: using sched offset of 14570970684 cycles Jul 15 23:46:56.593412 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 23:46:56.593427 kernel: tsc: Detected 2299.998 MHz processor Jul 15 23:46:56.593442 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 23:46:56.593463 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 23:46:56.593478 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 15 23:46:56.593493 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jul 15 23:46:56.593508 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 23:46:56.593523 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 15 23:46:56.593538 kernel: Using GB pages for direct mapping Jul 15 23:46:56.593554 kernel: ACPI: Early table checksum verification disabled Jul 15 23:46:56.593571 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 15 23:46:56.593598 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 15 23:46:56.593614 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 15 23:46:56.593630 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 15 23:46:56.593647 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 15 23:46:56.593664 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jul 15 23:46:56.593682 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 15 23:46:56.593702 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 15 23:46:56.593718 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 15 23:46:56.593732 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 15 23:46:56.593747 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 15 23:46:56.593762 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 15 23:46:56.593778 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 15 23:46:56.593795 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 15 23:46:56.593810 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 15 23:46:56.593826 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 15 23:46:56.593880 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 15 23:46:56.593895 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 15 23:46:56.593910 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 15 23:46:56.593925 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 15 23:46:56.593940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 15 23:46:56.593957 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 15 23:46:56.593972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jul 15 23:46:56.593989 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jul 15 23:46:56.594005 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jul 15 23:46:56.594027 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Jul 15 23:46:56.594043 kernel: Zone ranges: Jul 15 23:46:56.594059 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 23:46:56.594076 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 15 23:46:56.594100 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 15 23:46:56.594117 kernel: Device empty Jul 15 23:46:56.594133 kernel: Movable zone start for each node Jul 15 23:46:56.594150 kernel: Early memory node ranges Jul 15 23:46:56.594167 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 15 23:46:56.594183 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 15 23:46:56.594204 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jul 15 23:46:56.594221 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jul 15 23:46:56.594237 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 15 23:46:56.594254 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 15 23:46:56.594271 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 15 23:46:56.594287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 23:46:56.594304 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 15 23:46:56.594321 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 15 23:46:56.594346 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jul 15 23:46:56.594367 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 15 23:46:56.594383 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 15 23:46:56.594400 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 15 23:46:56.594417 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 23:46:56.594434 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 23:46:56.594451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 23:46:56.594468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 23:46:56.594484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 23:46:56.594501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 23:46:56.594521 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 23:46:56.594538 kernel: CPU topo: Max. logical packages: 1 Jul 15 23:46:56.594555 kernel: CPU topo: Max. logical dies: 1 Jul 15 23:46:56.594572 kernel: CPU topo: Max. dies per package: 1 Jul 15 23:46:56.594589 kernel: CPU topo: Max. threads per core: 2 Jul 15 23:46:56.594606 kernel: CPU topo: Num. cores per package: 1 Jul 15 23:46:56.594623 kernel: CPU topo: Num. 
threads per package: 2 Jul 15 23:46:56.594640 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 15 23:46:56.594657 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 15 23:46:56.594680 kernel: Booting paravirtualized kernel on KVM Jul 15 23:46:56.594698 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 23:46:56.594713 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 15 23:46:56.594730 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 15 23:46:56.594747 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 15 23:46:56.594764 kernel: pcpu-alloc: [0] 0 1 Jul 15 23:46:56.594778 kernel: kvm-guest: PV spinlocks enabled Jul 15 23:46:56.594796 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 23:46:56.594814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:46:56.596872 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 23:46:56.596908 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 15 23:46:56.596927 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 23:46:56.596944 kernel: Fallback order for Node 0: 0 Jul 15 23:46:56.596961 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Jul 15 23:46:56.596978 kernel: Policy zone: Normal Jul 15 23:46:56.596995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 23:46:56.597012 kernel: software IO TLB: area num 2. Jul 15 23:46:56.597047 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 15 23:46:56.597065 kernel: Kernel/User page tables isolation: enabled Jul 15 23:46:56.597083 kernel: ftrace: allocating 40095 entries in 157 pages Jul 15 23:46:56.597105 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 23:46:56.597121 kernel: Dynamic Preempt: voluntary Jul 15 23:46:56.597139 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 23:46:56.597158 kernel: rcu: RCU event tracing is enabled. Jul 15 23:46:56.597177 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 15 23:46:56.597196 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 23:46:56.597217 kernel: Rude variant of Tasks RCU enabled. Jul 15 23:46:56.597235 kernel: Tracing variant of Tasks RCU enabled. Jul 15 23:46:56.597253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 23:46:56.597271 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 15 23:46:56.597289 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:46:56.597307 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:46:56.597325 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 15 23:46:56.597351 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 15 23:46:56.597372 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 15 23:46:56.597390 kernel: Console: colour dummy device 80x25 Jul 15 23:46:56.597408 kernel: printk: legacy console [ttyS0] enabled Jul 15 23:46:56.597425 kernel: ACPI: Core revision 20240827 Jul 15 23:46:56.597443 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 23:46:56.597461 kernel: x2apic enabled Jul 15 23:46:56.597479 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 23:46:56.597497 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 15 23:46:56.597516 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 15 23:46:56.597539 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jul 15 23:46:56.597557 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 15 23:46:56.597575 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 15 23:46:56.597593 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 23:46:56.597610 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 15 23:46:56.597629 kernel: Spectre V2 : Mitigation: IBRS Jul 15 23:46:56.597646 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 23:46:56.597664 kernel: RETBleed: Mitigation: IBRS Jul 15 23:46:56.597682 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 23:46:56.597704 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jul 15 23:46:56.597721 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 15 23:46:56.597739 kernel: MDS: Mitigation: Clear CPU buffers Jul 15 23:46:56.597757 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 15 23:46:56.597775 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 15 23:46:56.597792 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 23:46:56.597810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 23:46:56.597828 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 23:46:56.597861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 23:46:56.597883 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 15 23:46:56.597900 kernel: Freeing SMP alternatives memory: 32K Jul 15 23:46:56.597918 kernel: pid_max: default: 32768 minimum: 301 Jul 15 23:46:56.597935 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 23:46:56.597953 kernel: landlock: Up and running. Jul 15 23:46:56.597971 kernel: SELinux: Initializing. Jul 15 23:46:56.597988 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 23:46:56.598006 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 15 23:46:56.598025 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 15 23:46:56.598046 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 15 23:46:56.598064 kernel: signal: max sigframe size: 1776 Jul 15 23:46:56.598082 kernel: rcu: Hierarchical SRCU implementation. Jul 15 23:46:56.598099 kernel: rcu: Max phase no-delay instances is 400. 
Jul 15 23:46:56.598117 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 23:46:56.598135 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 15 23:46:56.598153 kernel: smp: Bringing up secondary CPUs ... Jul 15 23:46:56.598171 kernel: smpboot: x86: Booting SMP configuration: Jul 15 23:46:56.598192 kernel: .... node #0, CPUs: #1 Jul 15 23:46:56.598211 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 15 23:46:56.598230 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 15 23:46:56.598248 kernel: smp: Brought up 1 node, 2 CPUs Jul 15 23:46:56.598266 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 15 23:46:56.598284 kernel: Memory: 7564016K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 290704K reserved, 0K cma-reserved) Jul 15 23:46:56.598302 kernel: devtmpfs: initialized Jul 15 23:46:56.598320 kernel: x86/mm: Memory block size: 128MB Jul 15 23:46:56.598348 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 15 23:46:56.598370 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 23:46:56.598387 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 15 23:46:56.598405 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 23:46:56.598422 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 23:46:56.598440 kernel: audit: initializing netlink subsys (disabled) Jul 15 23:46:56.598458 kernel: audit: type=2000 audit(1752623212.299:1): state=initialized audit_enabled=0 res=1 Jul 15 23:46:56.598475 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 23:46:56.598492 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 23:46:56.598509 kernel: cpuidle: using governor menu Jul 15 23:46:56.598531 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 23:46:56.598549 kernel: dca service started, version 1.12.1 Jul 15 23:46:56.598567 kernel: PCI: Using configuration type 1 for base access Jul 15 23:46:56.598585 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 23:46:56.598602 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 23:46:56.598620 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 23:46:56.598638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 23:46:56.598656 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 23:46:56.598677 kernel: ACPI: Added _OSI(Module Device) Jul 15 23:46:56.598695 kernel: ACPI: Added _OSI(Processor Device) Jul 15 23:46:56.598713 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 23:46:56.598730 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 15 23:46:56.598748 kernel: ACPI: Interpreter enabled Jul 15 23:46:56.598765 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 23:46:56.598783 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 23:46:56.598801 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 23:46:56.598819 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 15 23:46:56.598849 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 15 23:46:56.598871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 23:46:56.599151 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 15 23:46:56.599344 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 15 23:46:56.599521 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 15 23:46:56.599542 kernel: PCI host bridge to bus 0000:00 Jul 15 23:46:56.599717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 23:46:56.600958 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 23:46:56.601138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 23:46:56.601298 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 15 23:46:56.601466 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 23:46:56.601674 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 15 23:46:56.601889 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 15 23:46:56.602082 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 15 23:46:56.602270 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 15 23:46:56.602466 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jul 15 23:46:56.602650 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jul 15 23:46:56.602826 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jul 15 23:46:56.604081 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 15 23:46:56.604270 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jul 15 23:46:56.604466 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jul 15 23:46:56.604664 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 23:46:56.605965 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jul 15 23:46:56.606182 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jul 15 23:46:56.606208 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 23:46:56.606227 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 23:46:56.606246 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Jul 15 23:46:56.606270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 23:46:56.606288 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 15 23:46:56.606307 kernel: iommu: Default domain type: Translated Jul 15 23:46:56.606326 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 23:46:56.606353 kernel: efivars: Registered efivars operations Jul 15 23:46:56.606372 kernel: PCI: Using ACPI for IRQ routing Jul 15 23:46:56.606391 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 23:46:56.606409 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 15 23:46:56.606428 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 15 23:46:56.606449 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jul 15 23:46:56.606467 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 15 23:46:56.606485 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 15 23:46:56.606503 kernel: vgaarb: loaded Jul 15 23:46:56.606522 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 23:46:56.606540 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 23:46:56.606558 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 23:46:56.606577 kernel: pnp: PnP ACPI init Jul 15 23:46:56.606595 kernel: pnp: PnP ACPI: found 7 devices Jul 15 23:46:56.606617 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 23:46:56.606636 kernel: NET: Registered PF_INET protocol family Jul 15 23:46:56.606655 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 15 23:46:56.606674 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 15 23:46:56.606693 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 23:46:56.606712 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 23:46:56.606730 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 15 23:46:56.606749 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 15 23:46:56.606767 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 23:46:56.606789 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 15 23:46:56.606808 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 23:46:56.606826 kernel: NET: Registered PF_XDP protocol family Jul 15 23:46:56.607016 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 23:46:56.607190 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 23:46:56.607364 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 23:46:56.607528 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 15 23:46:56.607720 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 15 23:46:56.607750 kernel: PCI: CLS 0 bytes, default 64 Jul 15 23:46:56.607770 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 15 23:46:56.607789 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jul 15 23:46:56.607808 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 15 23:46:56.607833 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 15 23:46:56.608726 kernel: clocksource: Switched to clocksource tsc Jul 15 23:46:56.608747 
kernel: Initialise system trusted keyrings Jul 15 23:46:56.608766 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 15 23:46:56.608791 kernel: Key type asymmetric registered Jul 15 23:46:56.608810 kernel: Asymmetric key parser 'x509' registered Jul 15 23:46:56.608828 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 23:46:56.608864 kernel: io scheduler mq-deadline registered Jul 15 23:46:56.608883 kernel: io scheduler kyber registered Jul 15 23:46:56.608901 kernel: io scheduler bfq registered Jul 15 23:46:56.608918 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 23:46:56.608938 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 15 23:46:56.609162 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 15 23:46:56.609192 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 15 23:46:56.609388 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 15 23:46:56.609412 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 15 23:46:56.609598 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 15 23:46:56.609622 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 23:46:56.609641 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 23:46:56.609660 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 15 23:46:56.609678 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 15 23:46:56.609697 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 15 23:46:56.609916 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 15 23:46:56.609943 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 23:46:56.609962 kernel: i8042: Warning: Keylock active Jul 15 23:46:56.609980 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 23:46:56.609999 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 23:46:56.610204 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 15 23:46:56.610387 kernel: rtc_cmos 00:00: registered as rtc0 Jul 15 23:46:56.610561 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T23:46:55 UTC (1752623215) Jul 15 23:46:56.610731 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 15 23:46:56.610754 kernel: intel_pstate: CPU model not supported Jul 15 23:46:56.610773 kernel: pstore: Using crash dump compression: deflate Jul 15 23:46:56.610791 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 23:46:56.610811 kernel: NET: Registered PF_INET6 protocol family Jul 15 23:46:56.610829 kernel: Segment Routing with IPv6 Jul 15 23:46:56.611538 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 23:46:56.611561 kernel: NET: Registered PF_PACKET protocol family Jul 15 23:46:56.611586 kernel: Key type dns_resolver registered Jul 15 23:46:56.611604 kernel: IPI shorthand broadcast: enabled Jul 15 23:46:56.611623 kernel: sched_clock: Marking stable (3535003795, 144320747)->(3695044001, -15719459) Jul 15 23:46:56.611642 kernel: registered taskstats version 1 Jul 15 23:46:56.611661 kernel: Loading compiled-in X.509 certificates Jul 15 23:46:56.611679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d' Jul 15 23:46:56.611698 kernel: Demotion targets for Node 0: null Jul 15 23:46:56.611717 kernel: Key type .fscrypt registered Jul 15 23:46:56.611736 kernel: Key type fscrypt-provisioning registered Jul 15 
23:46:56.611759 kernel: ima: Allocated hash algorithm: sha1 Jul 15 23:46:56.611778 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 15 23:46:56.611797 kernel: ima: No architecture policies found Jul 15 23:46:56.611815 kernel: clk: Disabling unused clocks Jul 15 23:46:56.611834 kernel: Warning: unable to open an initial console. Jul 15 23:46:56.611876 kernel: Freeing unused kernel image (initmem) memory: 54424K Jul 15 23:46:56.611894 kernel: Write protecting the kernel read-only data: 24576k Jul 15 23:46:56.611913 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 23:46:56.611935 kernel: Run /init as init process Jul 15 23:46:56.611954 kernel: with arguments: Jul 15 23:46:56.611972 kernel: /init Jul 15 23:46:56.611991 kernel: with environment: Jul 15 23:46:56.612009 kernel: HOME=/ Jul 15 23:46:56.612028 kernel: TERM=linux Jul 15 23:46:56.612046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 23:46:56.612067 systemd[1]: Successfully made /usr/ read-only. Jul 15 23:46:56.612091 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:46:56.612116 systemd[1]: Detected virtualization google. Jul 15 23:46:56.612135 systemd[1]: Detected architecture x86-64. Jul 15 23:46:56.612154 systemd[1]: Running in initrd. Jul 15 23:46:56.612173 systemd[1]: No hostname configured, using default hostname. Jul 15 23:46:56.612194 systemd[1]: Hostname set to . Jul 15 23:46:56.612213 systemd[1]: Initializing machine ID from random generator. Jul 15 23:46:56.612233 systemd[1]: Queued start job for default target initrd.target. Jul 15 23:46:56.612257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:46:56.612295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:46:56.612320 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 23:46:56.612350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:46:56.612371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 23:46:56.612398 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 23:46:56.612420 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 23:46:56.612441 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 23:46:56.612461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:46:56.612482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:46:56.612502 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:46:56.612523 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:46:56.612543 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:46:56.612567 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:46:56.612587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jul 15 23:46:56.612608 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:46:56.612629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 23:46:56.612649 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 23:46:56.612670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:46:56.612691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:46:56.612711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:46:56.612732 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:46:56.612757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:46:56.612778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:46:56.612798 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 23:46:56.612819 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:46:56.612860 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:46:56.612881 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:46:56.612902 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:46:56.612960 systemd-journald[207]: Collecting audit messages is disabled. Jul 15 23:46:56.613011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:46:56.613032 systemd-journald[207]: Journal started Jul 15 23:46:56.613080 systemd-journald[207]: Runtime Journal (/run/log/journal/49f8cf9cb1f040c9a87fbeea5dda1ae1) is 8M, max 148.9M, 140.9M free. Jul 15 23:46:56.623836 systemd-modules-load[209]: Inserted module 'overlay' Jul 15 23:46:56.638989 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:46:56.648924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:46:56.649355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:46:56.649604 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:46:56.656523 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:46:56.659313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:46:56.687944 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 23:46:56.690808 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:46:56.772033 kernel: Bridge firewalling registered Jul 15 23:46:56.695542 systemd-modules-load[209]: Inserted module 'br_netfilter' Jul 15 23:46:56.698421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:46:56.759503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:46:56.782314 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:46:56.789334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:46:56.810442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 15 23:46:56.826951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:46:56.855882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:46:56.899986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:46:56.910024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:46:56.914921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:46:56.933231 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:46:56.955147 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 23:46:56.963428 systemd-resolved[234]: Positive Trust Anchors: Jul 15 23:46:56.963437 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:46:56.963480 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:46:56.967041 systemd-resolved[234]: Defaulting to hostname 'linux'. Jul 15 23:46:56.968242 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:46:57.041210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:46:57.086017 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:46:57.170881 kernel: SCSI subsystem initialized Jul 15 23:46:57.187891 kernel: Loading iSCSI transport class v2.0-870. Jul 15 23:46:57.203884 kernel: iscsi: registered transport (tcp) Jul 15 23:46:57.237180 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:46:57.237261 kernel: QLogic iSCSI HBA Driver Jul 15 23:46:57.260085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:46:57.298076 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:46:57.300110 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:46:57.374575 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:46:57.376474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 23:46:57.464885 kernel: raid6: avx2x4 gen() 17802 MB/s Jul 15 23:46:57.485879 kernel: raid6: avx2x2 gen() 18020 MB/s Jul 15 23:46:57.511890 kernel: raid6: avx2x1 gen() 13896 MB/s Jul 15 23:46:57.511972 kernel: raid6: using algorithm avx2x2 gen() 18020 MB/s Jul 15 23:46:57.538958 kernel: raid6: .... 
xor() 18496 MB/s, rmw enabled Jul 15 23:46:57.539052 kernel: raid6: using avx2x2 recovery algorithm Jul 15 23:46:57.567871 kernel: xor: automatically using best checksumming function avx Jul 15 23:46:57.754887 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:46:57.763465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:46:57.774141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:46:57.807187 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 15 23:46:57.815818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:46:57.837128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:46:57.875249 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 15 23:46:57.906828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:46:57.925891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:46:58.039232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:46:58.062024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 23:46:58.163733 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 23:46:58.175889 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 23:46:58.190130 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 15 23:46:58.190476 kernel: AES CTR mode by8 optimization enabled Jul 15 23:46:58.249065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:46:58.249273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:46:58.310994 kernel: scsi host0: Virtio SCSI HBA Jul 15 23:46:58.311264 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 15 23:46:58.302347 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:46:58.339459 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 15 23:46:58.339777 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 15 23:46:58.340016 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 15 23:46:58.340237 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 15 23:46:58.340454 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 15 23:46:58.357174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:46:58.410037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:46:58.410089 kernel: GPT:17805311 != 25165823 Jul 15 23:46:58.410114 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:46:58.410137 kernel: GPT:17805311 != 25165823 Jul 15 23:46:58.410158 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 23:46:58.410198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:46:58.410221 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 15 23:46:58.392238 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:46:58.443912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:46:58.497709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 15 23:46:58.510505 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Jul 15 23:46:58.539929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 15 23:46:58.549954 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 15 23:46:58.570309 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:46:58.606285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 15 23:46:58.616606 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:46:58.634950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:46:58.654985 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:46:58.673065 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:46:58.682185 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:46:58.719435 disk-uuid[605]: Primary Header is updated. Jul 15 23:46:58.719435 disk-uuid[605]: Secondary Entries is updated. Jul 15 23:46:58.719435 disk-uuid[605]: Secondary Header is updated. Jul 15 23:46:58.743184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:46:58.735420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:46:58.773882 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:46:59.796730 disk-uuid[606]: The operation has completed successfully. Jul 15 23:46:59.804005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:46:59.881274 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:46:59.881428 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:46:59.925493 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:46:59.957558 sh[627]: Success Jul 15 23:46:59.995038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:46:59.995159 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:46:59.995189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:47:00.020873 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 23:47:00.108544 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:47:00.111962 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:47:00.148935 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:47:00.192716 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:47:00.192791 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (639) Jul 15 23:47:00.211119 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e Jul 15 23:47:00.211213 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:47:00.211238 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:47:00.244015 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:47:00.244804 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:47:00.267137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 15 23:47:00.268211 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:47:00.277093 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:47:00.337893 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (662) Jul 15 23:47:00.348192 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:47:00.361921 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:47:00.362002 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:47:00.380933 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:47:00.382440 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:47:00.394137 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:47:00.480126 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:47:00.512288 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:47:00.618042 systemd-networkd[809]: lo: Link UP Jul 15 23:47:00.618054 systemd-networkd[809]: lo: Gained carrier Jul 15 23:47:00.623492 systemd-networkd[809]: Enumeration completed Jul 15 23:47:00.624131 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:47:00.624138 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:47:00.625272 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:47:00.627056 systemd-networkd[809]: eth0: Link UP Jul 15 23:47:00.627064 systemd-networkd[809]: eth0: Gained carrier Jul 15 23:47:00.627080 systemd-networkd[809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:47:00.642918 systemd-networkd[809]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9' Jul 15 23:47:00.642943 systemd-networkd[809]: eth0: DHCPv4 address 10.128.0.95/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:47:00.650079 systemd[1]: Reached target network.target - Network. Jul 15 23:47:00.655611 ignition[736]: Ignition 2.21.0 Jul 15 23:47:00.655628 ignition[736]: Stage: fetch-offline Jul 15 23:47:00.755471 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:47:00.655671 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:00.762036 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 23:47:00.655685 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:00.655812 ignition[736]: parsed url from cmdline: "" Jul 15 23:47:00.655819 ignition[736]: no config URL provided Jul 15 23:47:00.655828 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:47:00.655862 ignition[736]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:47:00.827092 unknown[819]: fetched base config from "system" Jul 15 23:47:00.655873 ignition[736]: failed to fetch config: resource requires networking Jul 15 23:47:00.827104 unknown[819]: fetched base config from "system" Jul 15 23:47:00.656227 ignition[736]: Ignition finished successfully Jul 15 23:47:00.827113 unknown[819]: fetched user config from "gcp" Jul 15 23:47:00.812234 ignition[819]: Ignition 2.21.0 Jul 15 23:47:00.830564 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:47:00.812243 ignition[819]: Stage: fetch Jul 15 23:47:00.841219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 23:47:00.812438 ignition[819]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:00.895714 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:47:00.812452 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:00.915213 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:47:00.812564 ignition[819]: parsed url from cmdline: "" Jul 15 23:47:00.812569 ignition[819]: no config URL provided Jul 15 23:47:00.812575 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:47:00.812587 ignition[819]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:47:00.972986 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:47:00.812628 ignition[819]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 15 23:47:00.986297 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:47:00.817776 ignition[819]: GET result: OK Jul 15 23:47:01.003001 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:47:00.817954 ignition[819]: parsing config with SHA512: dc9410115b0f96f2d971fbc11ea1c3a0d30d66eb538616df516193f00028ae7404ff3bfea77637a06b9210e6985ed08df7b9cf76297b795a00f53ccd099fb220 Jul 15 23:47:01.020002 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:47:00.827611 ignition[819]: fetch: fetch complete Jul 15 23:47:01.038990 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:47:00.827617 ignition[819]: fetch: fetch passed Jul 15 23:47:01.056959 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:47:00.827778 ignition[819]: Ignition finished successfully Jul 15 23:47:01.071218 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 15 23:47:00.890914 ignition[826]: Ignition 2.21.0 Jul 15 23:47:00.890922 ignition[826]: Stage: kargs Jul 15 23:47:00.891105 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:00.891117 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:00.893536 ignition[826]: kargs: kargs passed Jul 15 23:47:00.893653 ignition[826]: Ignition finished successfully Jul 15 23:47:00.969785 ignition[833]: Ignition 2.21.0 Jul 15 23:47:00.969792 ignition[833]: Stage: disks Jul 15 23:47:00.970023 ignition[833]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:00.970036 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:00.971165 ignition[833]: disks: disks passed Jul 15 23:47:00.971222 ignition[833]: Ignition finished successfully Jul 15 23:47:01.148003 systemd-fsck[841]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 15 23:47:01.210730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:47:01.232520 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:47:01.425880 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none. Jul 15 23:47:01.426619 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:47:01.427468 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:47:01.442530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:47:01.465545 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:47:01.478505 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:47:01.524461 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (849) Jul 15 23:47:01.524505 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:47:01.524530 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:47:01.524554 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:47:01.478570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:47:01.478602 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:47:01.559252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:47:01.573284 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:47:01.582374 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:47:01.707940 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:47:01.721536 initrd-setup-root[880]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:47:01.731494 initrd-setup-root[887]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:47:01.740020 initrd-setup-root[894]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:47:01.762213 systemd-networkd[809]: eth0: Gained IPv6LL Jul 15 23:47:01.870436 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:47:01.872284 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:47:01.898951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 15 23:47:01.931868 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:47:01.935030 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:47:01.971083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:47:01.975746 ignition[961]: INFO : Ignition 2.21.0 Jul 15 23:47:01.975746 ignition[961]: INFO : Stage: mount Jul 15 23:47:02.007993 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:02.007993 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:02.007993 ignition[961]: INFO : mount: mount passed Jul 15 23:47:02.007993 ignition[961]: INFO : Ignition finished successfully Jul 15 23:47:01.986347 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:47:02.004203 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:47:02.062236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:47:02.105885 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (974) Jul 15 23:47:02.123702 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:47:02.123759 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:47:02.123784 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:47:02.136965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:47:02.184644 ignition[991]: INFO : Ignition 2.21.0 Jul 15 23:47:02.184644 ignition[991]: INFO : Stage: files Jul 15 23:47:02.197981 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:02.197981 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:02.197981 ignition[991]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:47:02.197981 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:47:02.197981 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:47:02.197981 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:47:02.197981 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:47:02.197981 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:47:02.195131 unknown[991]: wrote ssh authorized keys file for user: core Jul 15 23:47:02.289948 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 23:47:02.289948 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 15 23:47:02.595437 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:47:03.121510 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 15 23:47:03.137992 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:47:03.137992 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 23:47:03.516209 ignition[991]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 23:47:03.707585 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:47:03.707585 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 23:47:03.735988 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 15 23:47:04.098639 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 23:47:04.717107 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 15 23:47:04.717107 ignition[991]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 23:47:04.754025 ignition[991]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:47:04.754025 ignition[991]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:47:04.754025 ignition[991]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 23:47:04.754025 ignition[991]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 15 
23:47:04.754025 ignition[991]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:47:04.754025 ignition[991]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:47:04.754025 ignition[991]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:47:04.754025 ignition[991]: INFO : files: files passed Jul 15 23:47:04.754025 ignition[991]: INFO : Ignition finished successfully Jul 15 23:47:04.724932 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:47:04.745365 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:47:04.755188 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:47:04.857684 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:47:04.949027 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:47:04.949027 initrd-setup-root-after-ignition[1020]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:47:04.857879 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:47:04.985085 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:47:04.883401 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:47:04.901343 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:47:04.920033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:47:05.010802 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:47:05.010976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:47:05.027610 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:47:05.046117 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:47:05.055222 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:47:05.056333 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:47:05.134781 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:47:05.136997 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:47:05.193301 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:47:05.212137 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:47:05.212466 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:47:05.240211 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:47:05.240411 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:47:05.266199 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:47:05.285164 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:47:05.303194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:47:05.319287 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 15 23:47:05.338187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:47:05.357223 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:47:05.376185 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:47:05.394115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:47:05.413197 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:47:05.432167 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:47:05.450126 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:47:05.466120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:47:05.466320 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:47:05.490185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:47:05.508171 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:47:05.527070 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:47:05.527266 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:47:05.547185 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:47:05.547380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:47:05.576171 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:47:05.576378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:47:05.595153 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:47:05.595325 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:47:05.614272 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:47:05.667982 ignition[1045]: INFO : Ignition 2.21.0 Jul 15 23:47:05.667982 ignition[1045]: INFO : Stage: umount Jul 15 23:47:05.667982 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:47:05.667982 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:47:05.629955 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 23:47:05.733048 ignition[1045]: INFO : umount: umount passed Jul 15 23:47:05.733048 ignition[1045]: INFO : Ignition finished successfully Jul 15 23:47:05.674326 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:47:05.674655 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:47:05.688326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:47:05.688499 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:47:05.728020 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:47:05.729466 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:47:05.729584 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:47:05.742628 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:47:05.742752 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:47:05.753066 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:47:05.753331 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:47:05.763789 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jul 15 23:47:05.763934 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:47:05.789156 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:47:05.789227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:47:05.798178 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 15 23:47:05.798236 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 15 23:47:05.814192 systemd[1]: Stopped target network.target - Network. Jul 15 23:47:05.830160 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:47:05.830242 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:47:05.844284 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:47:05.861156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:47:05.864923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:47:05.884981 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:47:05.892143 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:47:05.906176 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:47:05.906255 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:47:05.922196 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:47:05.922271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:47:05.939186 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 23:47:05.939273 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:47:05.955203 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:47:05.955269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:47:05.971193 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:47:05.971270 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:47:05.987380 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:47:06.012129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:47:06.028547 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:47:06.028691 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:47:06.049244 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:47:06.049503 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:47:06.049635 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:47:06.054526 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:47:06.056279 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:47:06.071198 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:47:06.071252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:47:06.089292 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:47:06.112987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:47:06.113089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:47:06.123144 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 15 23:47:06.123209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:47:06.132298 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 23:47:06.132357 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 23:47:06.161225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 23:47:06.599974 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 15 23:47:06.161329 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:47:06.178305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:47:06.196297 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:47:06.196384 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:47:06.196958 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 23:47:06.197122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:47:06.214590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 23:47:06.214719 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 23:47:06.228028 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 23:47:06.228097 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:47:06.251127 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 23:47:06.251197 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:47:06.276217 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 23:47:06.276309 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 23:47:06.303090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 23:47:06.303283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:47:06.331201 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 23:47:06.347928 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 23:47:06.348040 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:47:06.366336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 23:47:06.366408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:47:06.395286 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 23:47:06.395355 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:47:06.415203 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 23:47:06.415269 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:47:06.434067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:47:06.434156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:47:06.454765 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 23:47:06.454859 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Jul 15 23:47:06.454915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 23:47:06.454966 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:47:06.455479 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 23:47:06.455593 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 23:47:06.463430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 23:47:06.463539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 23:47:06.490140 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 23:47:06.508023 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 23:47:06.543521 systemd[1]: Switching root. Jul 15 23:47:06.968942 systemd-journald[207]: Journal stopped Jul 15 23:47:09.481305 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 23:47:09.481350 kernel: SELinux: policy capability open_perms=1 Jul 15 23:47:09.481364 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 23:47:09.481375 kernel: SELinux: policy capability always_check_network=0 Jul 15 23:47:09.481386 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 23:47:09.481404 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 23:47:09.481420 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 23:47:09.481431 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 23:47:09.481443 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 23:47:09.481454 kernel: audit: type=1403 audit(1752623227.327:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 23:47:09.481469 systemd[1]: Successfully loaded SELinux policy in 93.595ms. Jul 15 23:47:09.481483 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.465ms. Jul 15 23:47:09.481497 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 23:47:09.481513 systemd[1]: Detected virtualization google. Jul 15 23:47:09.481527 systemd[1]: Detected architecture x86-64. Jul 15 23:47:09.481539 systemd[1]: Detected first boot. Jul 15 23:47:09.481552 systemd[1]: Initializing machine ID from random generator. Jul 15 23:47:09.481565 zram_generator::config[1087]: No configuration found. Jul 15 23:47:09.481583 kernel: Guest personality initialized and is inactive Jul 15 23:47:09.481595 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 23:47:09.481607 kernel: Initialized host personality Jul 15 23:47:09.481619 kernel: NET: Registered PF_VSOCK protocol family Jul 15 23:47:09.481632 systemd[1]: Populated /etc with preset unit settings. Jul 15 23:47:09.481650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 23:47:09.481664 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 23:47:09.481679 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 23:47:09.481692 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 23:47:09.481705 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Jul 15 23:47:09.481718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 23:47:09.481732 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 23:47:09.481926 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 23:47:09.481950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 23:47:09.481979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 23:47:09.482001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 23:47:09.482021 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 23:47:09.482042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:47:09.482062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:47:09.482082 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 23:47:09.482103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 23:47:09.482124 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 23:47:09.482152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 23:47:09.482178 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 23:47:09.482200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:47:09.482238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:47:09.482264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 23:47:09.482286 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 23:47:09.482305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 23:47:09.482328 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 23:47:09.482354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:47:09.482377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:47:09.482399 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:47:09.482434 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:47:09.482457 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 23:47:09.482479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 23:47:09.482501 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 23:47:09.482529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:47:09.482551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:47:09.482574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:47:09.482597 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 23:47:09.482619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 23:47:09.482648 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 23:47:09.482673 systemd[1]: Mounting media.mount - External Media Directory... 
Jul 15 23:47:09.482695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:09.482716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 23:47:09.482738 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 23:47:09.482762 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 23:47:09.482785 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 23:47:09.482806 systemd[1]: Reached target machines.target - Containers. Jul 15 23:47:09.482827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 23:47:09.482876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:47:09.482900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:47:09.482923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 23:47:09.482945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:47:09.482968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:47:09.482991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:47:09.483014 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 23:47:09.483043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:47:09.483065 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 23:47:09.483091 kernel: fuse: init (API version 7.41) Jul 15 23:47:09.483112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 23:47:09.483135 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 23:47:09.483157 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 23:47:09.483179 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 23:47:09.483203 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:47:09.483223 kernel: ACPI: bus type drm_connector registered Jul 15 23:47:09.483242 kernel: loop: module loaded Jul 15 23:47:09.483271 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:47:09.483292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:47:09.483313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:47:09.483336 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 23:47:09.483359 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 23:47:09.483427 systemd-journald[1175]: Collecting audit messages is disabled. Jul 15 23:47:09.483483 systemd-journald[1175]: Journal started Jul 15 23:47:09.483525 systemd-journald[1175]: Runtime Journal (/run/log/journal/5a42dd7d961a41f7ab7f71e84ce4e2af) is 8M, max 148.9M, 140.9M free. 
Jul 15 23:47:08.262359 systemd[1]: Queued start job for default target multi-user.target. Jul 15 23:47:08.287522 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 15 23:47:08.288236 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 23:47:09.495864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:47:09.518509 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 23:47:09.518585 systemd[1]: Stopped verity-setup.service. Jul 15 23:47:09.542871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:09.554881 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:47:09.564318 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 23:47:09.573134 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 23:47:09.582120 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 23:47:09.591102 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 23:47:09.600110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 23:47:09.609169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 23:47:09.618385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 23:47:09.630308 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:47:09.641254 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 23:47:09.641531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 23:47:09.652256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:47:09.652518 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:47:09.663253 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:47:09.663512 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:47:09.672221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:47:09.672480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:47:09.683242 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 23:47:09.683499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 23:47:09.692222 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:47:09.692481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:47:09.701384 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:47:09.713300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:47:09.725297 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 23:47:09.736285 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 23:47:09.746268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:47:09.770147 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:47:09.780578 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 23:47:09.798953 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 15 23:47:09.807975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 23:47:09.808166 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:47:09.818243 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 23:47:09.829211 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 23:47:09.838131 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:47:09.845010 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 23:47:09.857028 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 23:47:09.868134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:47:09.872087 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 23:47:09.881020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:47:09.883169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:47:09.894087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 23:47:09.907639 systemd-journald[1175]: Time spent on flushing to /var/log/journal/5a42dd7d961a41f7ab7f71e84ce4e2af is 55.009ms for 958 entries. Jul 15 23:47:09.907639 systemd-journald[1175]: System Journal (/var/log/journal/5a42dd7d961a41f7ab7f71e84ce4e2af) is 8M, max 584.8M, 576.8M free. Jul 15 23:47:10.014809 systemd-journald[1175]: Received client request to flush runtime journal. Jul 15 23:47:10.016592 kernel: loop0: detected capacity change from 0 to 113872 Jul 15 23:47:09.919897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:47:09.938119 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 23:47:09.948189 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 23:47:09.964423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 23:47:09.980160 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 23:47:09.994984 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 23:47:10.009653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:47:10.028989 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 23:47:10.057173 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 23:47:10.066965 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 23:47:10.068624 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 23:47:10.075267 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jul 15 23:47:10.075299 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jul 15 23:47:10.094238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 15 23:47:10.094998 kernel: loop1: detected capacity change from 0 to 146240 Jul 15 23:47:10.111418 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 23:47:10.170903 kernel: loop2: detected capacity change from 0 to 52072 Jul 15 23:47:10.208594 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 23:47:10.222119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:47:10.240071 kernel: loop3: detected capacity change from 0 to 229808 Jul 15 23:47:10.291827 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jul 15 23:47:10.292361 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jul 15 23:47:10.308898 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:47:10.375202 kernel: loop4: detected capacity change from 0 to 113872 Jul 15 23:47:10.421037 kernel: loop5: detected capacity change from 0 to 146240 Jul 15 23:47:10.472874 kernel: loop6: detected capacity change from 0 to 52072 Jul 15 23:47:10.504900 kernel: loop7: detected capacity change from 0 to 229808 Jul 15 23:47:10.549210 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jul 15 23:47:10.550362 (sd-merge)[1235]: Merged extensions into '/usr'. Jul 15 23:47:10.558816 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 23:47:10.559262 systemd[1]: Reloading... Jul 15 23:47:10.702918 zram_generator::config[1257]: No configuration found. Jul 15 23:47:10.970928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:47:11.027922 ldconfig[1205]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 23:47:11.181212 systemd[1]: Reloading finished in 620 ms. Jul 15 23:47:11.199634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:47:11.210536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 23:47:11.234050 systemd[1]: Starting ensure-sysext.service... Jul 15 23:47:11.242312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:47:11.274595 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)... Jul 15 23:47:11.274621 systemd[1]: Reloading... Jul 15 23:47:11.309825 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 23:47:11.310411 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 23:47:11.311438 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 23:47:11.311988 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 23:47:11.314425 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 23:47:11.317301 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jul 15 23:47:11.317428 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. 
Jul 15 23:47:11.328442 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:47:11.328466 systemd-tmpfiles[1302]: Skipping /boot Jul 15 23:47:11.362423 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:47:11.362610 systemd-tmpfiles[1302]: Skipping /boot Jul 15 23:47:11.446870 zram_generator::config[1329]: No configuration found. Jul 15 23:47:11.572640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:47:11.685882 systemd[1]: Reloading finished in 410 ms. Jul 15 23:47:11.713652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:47:11.735948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:47:11.754630 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:47:11.767302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:47:11.785330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:47:11.799959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:47:11.813457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:47:11.826933 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:47:11.848099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:11.848647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:47:11.852600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:47:11.865806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:47:11.880163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:47:11.885627 augenrules[1399]: No rules Jul 15 23:47:11.890093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:47:11.890554 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:47:11.899310 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:47:11.907935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:11.911523 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:47:11.917518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:47:11.927804 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:47:11.938689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:47:11.939786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:47:11.944928 systemd-udevd[1390]: Using default interface naming scheme 'v255'. 
Jul 15 23:47:11.951242 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:47:11.963894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:47:11.964417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:47:11.977643 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:47:11.978146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:47:11.997940 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:47:12.008376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:47:12.054928 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:47:12.074940 systemd[1]: Finished ensure-sysext.service. Jul 15 23:47:12.086832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:12.092854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:47:12.101305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:47:12.109256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:47:12.124663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:47:12.134944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:47:12.154238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:47:12.168829 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 15 23:47:12.177172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:47:12.177251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:47:12.183211 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:47:12.193010 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:47:12.205197 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:47:12.205560 augenrules[1446]: /sbin/augenrules: No change Jul 15 23:47:12.213987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:47:12.214046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:47:12.215622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:47:12.221043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:47:12.234466 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 23:47:12.235555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:47:12.244379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:47:12.244693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:47:12.255891 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 15 23:47:12.257575 augenrules[1475]: No rules Jul 15 23:47:12.257413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:47:12.266470 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:47:12.268023 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:47:12.281911 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:47:12.324104 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jul 15 23:47:12.328268 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 15 23:47:12.337009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:47:12.337132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:47:12.338932 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 15 23:47:12.355770 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jul 15 23:47:12.380109 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 23:47:12.380884 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:47:12.472922 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jul 15 23:47:12.482993 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 15 23:47:12.491869 kernel: ACPI: button: Power Button [PWRF] Jul 15 23:47:12.501307 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 15 23:47:12.508809 kernel: ACPI: button: Sleep Button [SLPF] Jul 15 23:47:12.531434 systemd-resolved[1384]: Positive Trust Anchors: Jul 15 23:47:12.532253 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:47:12.532447 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:47:12.546477 systemd-resolved[1384]: Defaulting to hostname 'linux'. Jul 15 23:47:12.557716 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:47:12.598429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 15 23:47:12.609082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:47:12.619055 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:47:12.628417 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:47:12.641935 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:47:12.651978 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 23:47:12.662235 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:47:12.671185 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jul 15 23:47:12.682032 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:47:12.685202 systemd-networkd[1460]: lo: Link UP Jul 15 23:47:12.685216 systemd-networkd[1460]: lo: Gained carrier Jul 15 23:47:12.691987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:47:12.692047 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:47:12.692452 systemd-networkd[1460]: Enumeration completed Jul 15 23:47:12.693069 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:47:12.693076 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:47:12.694599 systemd-networkd[1460]: eth0: Link UP Jul 15 23:47:12.696070 systemd-networkd[1460]: eth0: Gained carrier Jul 15 23:47:12.696234 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:47:12.699974 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:47:12.719179 systemd-networkd[1460]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9' Jul 15 23:47:12.721687 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 15 23:47:12.719912 systemd-networkd[1460]: eth0: DHCPv4 address 10.128.0.95/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:47:12.721096 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:47:12.735475 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:47:12.748521 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:47:12.761211 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:47:12.770984 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:47:12.791771 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:47:12.801720 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:47:12.815125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:47:12.825814 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:47:12.837869 kernel: EDAC MC: Ver: 3.0.0 Jul 15 23:47:12.840272 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:47:12.903943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:47:12.932155 systemd[1]: Reached target network.target - Network. Jul 15 23:47:12.940115 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:47:12.949140 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:47:12.957152 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:47:12.957326 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jul 15 23:47:12.961345 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:47:12.975610 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 15 23:47:12.987147 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:47:12.997475 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:47:13.012137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:47:13.024256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:47:13.032948 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:47:13.034856 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 23:47:13.065254 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:47:13.078283 systemd[1]: Started ntpd.service - Network Time Service. Jul 15 23:47:13.090068 jq[1531]: false Jul 15 23:47:13.089986 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:47:13.103285 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:47:13.108423 extend-filesystems[1532]: Found /dev/sda6 Jul 15 23:47:13.109419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:47:13.133705 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:47:13.136729 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jul 15 23:47:13.136540 oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jul 15 23:47:13.139504 extend-filesystems[1532]: Found /dev/sda9 Jul 15 23:47:13.148815 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:47:13.159096 extend-filesystems[1532]: Checking size of /dev/sda9 Jul 15 23:47:13.172098 oslogin_cache_refresh[1533]: Failure getting users, quitting Jul 15 23:47:13.173508 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting Jul 15 23:47:13.173508 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:47:13.173508 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache Jul 15 23:47:13.166169 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:47:13.172126 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:47:13.172202 oslogin_cache_refresh[1533]: Refreshing group entry cache Jul 15 23:47:13.178298 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting Jul 15 23:47:13.178408 oslogin_cache_refresh[1533]: Failure getting groups, quitting Jul 15 23:47:13.178563 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:47:13.178435 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 15 23:47:13.186428 coreos-metadata[1528]: Jul 15 23:47:13.186 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 15 23:47:13.194722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetch successful Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetch successful Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetch successful Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 15 23:47:13.196524 coreos-metadata[1528]: Jul 15 23:47:13.195 INFO Fetch successful Jul 15 23:47:13.205361 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 15 23:47:13.208821 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:47:13.215077 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:47:13.217157 extend-filesystems[1532]: Resized partition /dev/sda9 Jul 15 23:47:13.232990 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:47:13.256247 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 15 23:47:13.227516 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:47:13.268267 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 15 23:47:13.282895 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 15 23:47:13.282895 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 15 23:47:13.282895 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 15 23:47:13.289177 ntpd[1535]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: ---------------------------------------------------- Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: corporation. 
Support and training for ntp-4 are Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: available at https://www.nwtime.org/support Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: ---------------------------------------------------- Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: proto: precision = 0.092 usec (-23) Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: basedate set to 2025-07-03 Jul 15 23:47:13.314653 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: gps base set to 2025-07-06 (week 2374) Jul 15 23:47:13.289420 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:47:13.289476 ntpd[1535]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 15 23:47:13.313724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:47:13.289492 ntpd[1535]: ---------------------------------------------------- Jul 15 23:47:13.289505 ntpd[1535]: ntp-4 is maintained by Network Time Foundation, Jul 15 23:47:13.326125 jq[1567]: true Jul 15 23:47:13.315952 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:47:13.289518 ntpd[1535]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 15 23:47:13.316767 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 23:47:13.289531 ntpd[1535]: corporation. Support and training for ntp-4 are Jul 15 23:47:13.317174 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 23:47:13.289544 ntpd[1535]: available at https://www.nwtime.org/support Jul 15 23:47:13.289556 ntpd[1535]: ---------------------------------------------------- Jul 15 23:47:13.297396 ntpd[1535]: proto: precision = 0.092 usec (-23) Jul 15 23:47:13.298687 ntpd[1535]: basedate set to 2025-07-03 Jul 15 23:47:13.298714 ntpd[1535]: gps base set to 2025-07-06 (week 2374) Jul 15 23:47:13.333190 extend-filesystems[1532]: Resized filesystem in /dev/sda9 Jul 15 23:47:13.328160 ntpd[1535]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listen and drop on 0 v6wildcard [::]:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listen normally on 3 eth0 10.128.0.95:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listen normally on 4 lo [::1]:123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: bind(21) AF_INET6 fe80::4001:aff:fe80:5f%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5f%2#123 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: failed to init interface for address fe80::4001:aff:fe80:5f%2 Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: Listening on routing socket on fd #21 for interface updates Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:47:13.368135 ntpd[1535]: 15 Jul 23:47:13 ntpd[1535]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:47:13.335239 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 15 23:47:13.328228 ntpd[1535]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 15 23:47:13.336935 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:47:13.330885 ntpd[1535]: Listen normally on 2 lo 127.0.0.1:123 Jul 15 23:47:13.347520 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:47:13.330959 ntpd[1535]: Listen normally on 3 eth0 10.128.0.95:123 Jul 15 23:47:13.348101 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:47:13.331035 ntpd[1535]: Listen normally on 4 lo [::1]:123 Jul 15 23:47:13.361817 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:47:13.331109 ntpd[1535]: bind(21) AF_INET6 fe80::4001:aff:fe80:5f%2#123 flags 0x11 failed: Cannot assign requested address Jul 15 23:47:13.363951 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:47:13.331145 ntpd[1535]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:5f%2#123 Jul 15 23:47:13.331170 ntpd[1535]: failed to init interface for address fe80::4001:aff:fe80:5f%2 Jul 15 23:47:13.331220 ntpd[1535]: Listening on routing socket on fd #21 for interface updates Jul 15 23:47:13.345103 ntpd[1535]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:47:13.345147 ntpd[1535]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 15 23:47:13.395996 update_engine[1563]: I20250715 23:47:13.384804 1563 main.cc:92] Flatcar Update Engine starting Jul 15 23:47:13.432787 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 15 23:47:13.446926 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 23:47:13.446984 systemd-logind[1545]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 15 23:47:13.447022 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 23:47:13.450084 systemd-logind[1545]: New seat seat0. Jul 15 23:47:13.463632 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:47:13.501851 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:47:13.503667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:47:13.513405 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:47:13.570874 jq[1577]: true Jul 15 23:47:13.619501 tar[1575]: linux-amd64/LICENSE Jul 15 23:47:13.619501 tar[1575]: linux-amd64/helm Jul 15 23:47:13.631300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:47:13.720161 bash[1615]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:47:13.719956 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:47:13.730550 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:47:13.746651 systemd[1]: Starting sshkeys.service... Jul 15 23:47:13.821552 dbus-daemon[1529]: [system] SELinux support is enabled Jul 15 23:47:13.824277 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
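For context on the ntpd lines above: the bind(21) failure for fe80::4001:aff:fe80:5f%2 happens because ntpd started before eth0's IPv6 link-local address was usable; the journal later records "eth0: Gained IPv6LL" and ntpd opening that socket ("Listen normally on 6 eth0 ..."). A minimal sketch, assuming Linux's /proc/net/if_inet6 and the interface name eth0, of how such a readiness check could be expressed:

```python
#!/usr/bin/env python3
"""Illustrative sketch: check whether an interface already has an IPv6
link-local address, the condition ntpd was missing when it logged
"failed to init interface for address fe80::..." above.
Assumes Linux (/proc/net/if_inet6) and interface name eth0."""

def has_ipv6_link_local(ifname: str) -> bool:
    try:
        with open("/proc/net/if_inet6") as f:
            for line in f:
                # Format per line: 32-hex-digit address, ifindex, prefixlen, scope, flags, name
                fields = line.split()
                if len(fields) == 6 and fields[5] == ifname:
                    # Link-local means the address falls in fe80::/10.
                    if int(fields[0][:4], 16) & 0xFFC0 == 0xFE80:
                        return True
    except FileNotFoundError:
        pass  # non-Linux host, or IPv6 disabled
    return False

if __name__ == "__main__":
    print("eth0 has a link-local address:", has_ipv6_link_local("eth0"))
```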
Jul 15 23:47:13.831611 dbus-daemon[1529]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1460 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 15 23:47:13.837057 update_engine[1563]: I20250715 23:47:13.837002 1563 update_check_scheduler.cc:74] Next update check in 6m7s Jul 15 23:47:13.838966 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 15 23:47:13.849615 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:47:13.875365 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:47:13.875679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:47:13.882637 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 15 23:47:13.886141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:47:13.886363 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:47:13.897634 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:47:13.916942 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 15 23:47:13.939076 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:47:14.007125 coreos-metadata[1619]: Jul 15 23:47:14.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 15 23:47:14.007125 coreos-metadata[1619]: Jul 15 23:47:14.005 INFO Fetch failed with 404: resource not found Jul 15 23:47:14.007125 coreos-metadata[1619]: Jul 15 23:47:14.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 15 23:47:14.007125 coreos-metadata[1619]: Jul 15 23:47:14.005 INFO Fetch successful Jul 15 23:47:14.007125 coreos-metadata[1619]: Jul 15 23:47:14.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 15 23:47:14.010864 coreos-metadata[1619]: Jul 15 23:47:14.009 INFO Fetch failed with 404: resource not found Jul 15 23:47:14.010864 coreos-metadata[1619]: Jul 15 23:47:14.009 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 15 23:47:14.011087 coreos-metadata[1619]: Jul 15 23:47:14.011 INFO Fetch failed with 404: resource not found Jul 15 23:47:14.011087 coreos-metadata[1619]: Jul 15 23:47:14.011 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 15 23:47:14.011087 coreos-metadata[1619]: Jul 15 23:47:14.011 INFO Fetch successful Jul 15 23:47:14.022551 unknown[1619]: wrote ssh authorized keys file for user: core Jul 15 23:47:14.116633 systemd-networkd[1460]: eth0: Gained IPv6LL Jul 15 23:47:14.125250 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:47:14.125114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
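The coreos-metadata fetches recorded above (instance hostname, external and internal IP, machine type, and the ssh-keys attributes, where a 404 just means the attribute is unset) all go through the GCE metadata server at 169.254.169.254. A minimal standard-library sketch of the same kind of lookup, assuming it runs on a GCE instance; the required "Metadata-Flavor: Google" header is part of the documented metadata API:

```python
#!/usr/bin/env python3
"""Fetch a few GCE instance metadata values, mirroring what coreos-metadata
does in the log above. Illustrative only; must run on a GCE instance."""
import urllib.error
import urllib.request

BASE = "http://169.254.169.254/computeMetadata/v1/"

def get(path):
    req = urllib.request.Request(BASE + path, headers={"Metadata-Flavor": "Google"})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 404:     # attribute simply not set, as with sshKeys above
            return None
        raise

if __name__ == "__main__":
    print("hostname:", get("instance/hostname"))
    print("internal ip:", get("instance/network-interfaces/0/ip"))
    # Per-instance keys take precedence; fall back to project-wide keys,
    # matching the fetch order shown in the log.
    keys = get("instance/attributes/ssh-keys") or get("project/attributes/ssh-keys")
    print("ssh keys present:", bool(keys))
```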
Jul 15 23:47:14.125704 update-ssh-keys[1629]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:47:14.136134 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:47:14.148787 systemd[1]: Finished sshkeys.service. Jul 15 23:47:14.166784 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:47:14.184910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:14.201181 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:47:14.214327 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jul 15 23:47:14.224248 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:47:14.244373 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:47:14.256465 systemd[1]: Started sshd@0-10.128.0.95:22-139.178.89.65:48614.service - OpenSSH per-connection server daemon (139.178.89.65:48614). Jul 15 23:47:14.271969 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:47:14.274831 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:47:14.279055 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 15 23:47:14.279956 dbus-daemon[1529]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1626 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 15 23:47:14.282923 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:47:14.284878 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 15 23:47:14.310546 init.sh[1646]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 15 23:47:14.310546 init.sh[1646]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 15 23:47:14.311679 systemd[1]: Starting polkit.service - Authorization Manager... Jul 15 23:47:14.315210 init.sh[1646]: + /usr/bin/google_instance_setup Jul 15 23:47:14.331227 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:47:14.415759 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:47:14.426830 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:47:14.439880 containerd[1579]: time="2025-07-15T23:47:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:47:14.444765 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:47:14.448600 containerd[1579]: time="2025-07-15T23:47:14.448538130Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:47:14.458400 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:47:14.468294 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 15 23:47:14.525930 containerd[1579]: time="2025-07-15T23:47:14.525615462Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.314µs" Jul 15 23:47:14.525930 containerd[1579]: time="2025-07-15T23:47:14.525665498Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:47:14.525930 containerd[1579]: time="2025-07-15T23:47:14.525699014Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:47:14.526420 containerd[1579]: time="2025-07-15T23:47:14.526379269Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:47:14.526586 containerd[1579]: time="2025-07-15T23:47:14.526558978Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:47:14.526710 containerd[1579]: time="2025-07-15T23:47:14.526689594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528031247Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528070029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528458383Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528491419Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528514927Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528530645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:47:14.528873 containerd[1579]: time="2025-07-15T23:47:14.528678609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:47:14.530297 containerd[1579]: time="2025-07-15T23:47:14.530253998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:47:14.531827 containerd[1579]: time="2025-07-15T23:47:14.531790284Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:47:14.532128 containerd[1579]: time="2025-07-15T23:47:14.532094513Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:47:14.533036 containerd[1579]: time="2025-07-15T23:47:14.532957251Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:47:14.533679 containerd[1579]: 
time="2025-07-15T23:47:14.533643857Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:47:14.534644 containerd[1579]: time="2025-07-15T23:47:14.534612082Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:47:14.543784 containerd[1579]: time="2025-07-15T23:47:14.543670458Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:47:14.543784 containerd[1579]: time="2025-07-15T23:47:14.543744831Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.544822185Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.544915871Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.544940317Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.544962217Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.544990503Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545014466Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545037417Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545057114Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545074838Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545098389Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545279480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545309911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545334328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:47:14.545459 containerd[1579]: time="2025-07-15T23:47:14.545357447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:47:14.546933 containerd[1579]: time="2025-07-15T23:47:14.545376963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:47:14.546933 containerd[1579]: time="2025-07-15T23:47:14.545397130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:47:14.548109 containerd[1579]: 
time="2025-07-15T23:47:14.547053436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547097899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547124821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547148072Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547169142Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547753101Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:47:14.548109 containerd[1579]: time="2025-07-15T23:47:14.547788895Z" level=info msg="Start snapshots syncer" Jul 15 23:47:14.549300 containerd[1579]: time="2025-07-15T23:47:14.548508230Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:47:14.550566 containerd[1579]: time="2025-07-15T23:47:14.550502690Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.552626651Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.552778244Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553007515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553046915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553079282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553100590Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553124640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553144399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553167519Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553226858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553249641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:47:14.553883 containerd[1579]: time="2025-07-15T23:47:14.553269478Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554463615Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554514656Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554535228Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554555763Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554572745Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554592261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554612789Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554644097Z" level=info msg="runtime interface created" Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554655467Z" level=info msg="created NRI interface" Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554683192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554705872Z" level=info msg="Connect containerd service" Jul 15 23:47:14.555819 containerd[1579]: time="2025-07-15T23:47:14.554767407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:47:14.563993 containerd[1579]: time="2025-07-15T23:47:14.561355948Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:47:14.817927 polkitd[1658]: Started polkitd version 126 Jul 15 23:47:14.831682 polkitd[1658]: Loading rules from directory /etc/polkit-1/rules.d Jul 15 23:47:14.836673 polkitd[1658]: Loading rules from directory /run/polkit-1/rules.d Jul 15 23:47:14.836903 polkitd[1658]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:47:14.837828 polkitd[1658]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 15 23:47:14.838439 polkitd[1658]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 15 23:47:14.838503 polkitd[1658]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 15 23:47:14.843822 polkitd[1658]: Finished loading, compiling and executing 2 rules Jul 15 23:47:14.845944 systemd[1]: Started polkit.service - Authorization Manager. Jul 15 23:47:14.849085 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 15 23:47:14.852157 polkitd[1658]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 15 23:47:14.887637 sshd[1651]: Accepted publickey for core from 139.178.89.65 port 48614 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:14.899038 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:14.938108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:47:14.951450 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:47:15.004638 systemd-logind[1545]: New session 1 of user core. Jul 15 23:47:15.005058 systemd-hostnamed[1626]: Hostname set to (transient) Jul 15 23:47:15.010479 systemd-resolved[1384]: System hostname changed to 'ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9'. Jul 15 23:47:15.028735 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:47:15.041896 containerd[1579]: time="2025-07-15T23:47:15.041614922Z" level=info msg="Start subscribing containerd event" Jul 15 23:47:15.043148 containerd[1579]: time="2025-07-15T23:47:15.043028546Z" level=info msg="Start recovering state" Jul 15 23:47:15.047078 systemd[1]: Starting user@500.service - User Manager for UID 500... 
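The containerd error above, "no network config found in /etc/cni/net.d", is expected at this point in boot: the CRI plugin's confDir (see the config dump earlier) is still empty until a network add-on installs a CNI config. Purely as a hypothetical sketch, using the standard bridge/host-local/portmap plugins with made-up name and subnet (a real cluster add-on would install its own file), writing such a conflist could look like this:

```python
#!/usr/bin/env python3
"""Write a minimal CNI conflist so containerd's CRI plugin can find a pod
network config. Hypothetical example values (network name, bridge, subnet);
a real Kubernetes network add-on normally installs its own file here."""
import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-pod-network",           # assumed name, not from the log
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.85.0.0/16",     # assumed example range
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

confdir = pathlib.Path("/etc/cni/net.d")      # the confDir shown in the CRI config dump above
confdir.mkdir(parents=True, exist_ok=True)
(confdir / "10-example.conflist").write_text(json.dumps(conflist, indent=2))
print("wrote", confdir / "10-example.conflist")
```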
Jul 15 23:47:15.048726 containerd[1579]: time="2025-07-15T23:47:15.047514561Z" level=info msg="Start event monitor" Jul 15 23:47:15.048975 containerd[1579]: time="2025-07-15T23:47:15.048951091Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:47:15.049926 containerd[1579]: time="2025-07-15T23:47:15.049358319Z" level=info msg="Start streaming server" Jul 15 23:47:15.051554 containerd[1579]: time="2025-07-15T23:47:15.051519911Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.054190707Z" level=info msg="runtime interface starting up..." Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.054217477Z" level=info msg="starting plugins..." Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.054244420Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.051035709Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.054440348Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:47:15.057010 containerd[1579]: time="2025-07-15T23:47:15.054506047Z" level=info msg="containerd successfully booted in 0.617302s" Jul 15 23:47:15.056731 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:47:15.096566 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:47:15.102592 systemd-logind[1545]: New session c1 of user core. Jul 15 23:47:15.484082 systemd[1694]: Queued start job for default target default.target. Jul 15 23:47:15.490232 systemd[1694]: Created slice app.slice - User Application Slice. Jul 15 23:47:15.490278 systemd[1694]: Reached target paths.target - Paths. Jul 15 23:47:15.490341 systemd[1694]: Reached target timers.target - Timers. Jul 15 23:47:15.494979 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:47:15.520575 tar[1575]: linux-amd64/README.md Jul 15 23:47:15.523013 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:47:15.524275 systemd[1694]: Reached target sockets.target - Sockets. Jul 15 23:47:15.524368 systemd[1694]: Reached target basic.target - Basic System. Jul 15 23:47:15.524450 systemd[1694]: Reached target default.target - Main User Target. Jul 15 23:47:15.524503 systemd[1694]: Startup finished in 405ms. Jul 15 23:47:15.524636 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:47:15.543112 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 23:47:15.572024 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:47:15.627186 instance-setup[1659]: INFO Running google_set_multiqueue. Jul 15 23:47:15.647658 instance-setup[1659]: INFO Set channels for eth0 to 2. Jul 15 23:47:15.652241 instance-setup[1659]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jul 15 23:47:15.653945 instance-setup[1659]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jul 15 23:47:15.654317 instance-setup[1659]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jul 15 23:47:15.656104 instance-setup[1659]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jul 15 23:47:15.656492 instance-setup[1659]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Jul 15 23:47:15.658903 instance-setup[1659]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jul 15 23:47:15.658974 instance-setup[1659]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jul 15 23:47:15.660577 instance-setup[1659]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jul 15 23:47:15.669341 instance-setup[1659]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 15 23:47:15.673677 instance-setup[1659]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 15 23:47:15.675622 instance-setup[1659]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 15 23:47:15.675899 instance-setup[1659]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 15 23:47:15.699401 init.sh[1646]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 15 23:47:15.792381 systemd[1]: Started sshd@1-10.128.0.95:22-139.178.89.65:48622.service - OpenSSH per-connection server daemon (139.178.89.65:48622). Jul 15 23:47:15.913428 startup-script[1737]: INFO Starting startup scripts. Jul 15 23:47:15.918832 startup-script[1737]: INFO No startup scripts found in metadata. Jul 15 23:47:15.918937 startup-script[1737]: INFO Finished running startup scripts. Jul 15 23:47:15.943981 init.sh[1646]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 15 23:47:15.943981 init.sh[1646]: + daemon_pids=() Jul 15 23:47:15.944198 init.sh[1646]: + for d in accounts clock_skew network Jul 15 23:47:15.944681 init.sh[1646]: + daemon_pids+=($!) Jul 15 23:47:15.944681 init.sh[1646]: + for d in accounts clock_skew network Jul 15 23:47:15.944807 init.sh[1743]: + /usr/bin/google_accounts_daemon Jul 15 23:47:15.945267 init.sh[1646]: + daemon_pids+=($!) Jul 15 23:47:15.945267 init.sh[1646]: + for d in accounts clock_skew network Jul 15 23:47:15.945267 init.sh[1646]: + daemon_pids+=($!) Jul 15 23:47:15.945267 init.sh[1646]: + NOTIFY_SOCKET=/run/systemd/notify Jul 15 23:47:15.945267 init.sh[1646]: + /usr/bin/systemd-notify --ready Jul 15 23:47:15.946774 init.sh[1744]: + /usr/bin/google_clock_skew_daemon Jul 15 23:47:15.947198 init.sh[1745]: + /usr/bin/google_network_daemon Jul 15 23:47:15.978483 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jul 15 23:47:15.989184 init.sh[1646]: + wait -n 1743 1744 1745 Jul 15 23:47:16.134898 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 48622 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:16.137532 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:16.153467 systemd-logind[1545]: New session 2 of user core. Jul 15 23:47:16.157043 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:47:16.292179 ntpd[1535]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:5f%2]:123 Jul 15 23:47:16.292800 ntpd[1535]: 15 Jul 23:47:16 ntpd[1535]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:5f%2]:123 Jul 15 23:47:16.351087 google-networking[1745]: INFO Starting Google Networking daemon. Jul 15 23:47:16.364880 sshd[1747]: Connection closed by 139.178.89.65 port 48622 Jul 15 23:47:16.365614 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:16.379340 systemd[1]: sshd@1-10.128.0.95:22-139.178.89.65:48622.service: Deactivated successfully. Jul 15 23:47:16.385520 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 15 23:47:16.388699 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:47:16.390822 systemd-logind[1545]: Removed session 2. Jul 15 23:47:16.399263 google-clock-skew[1744]: INFO Starting Google Clock Skew daemon. Jul 15 23:47:16.407201 google-clock-skew[1744]: INFO Clock drift token has changed: 0. Jul 15 23:47:16.419967 systemd[1]: Started sshd@2-10.128.0.95:22-139.178.89.65:48630.service - OpenSSH per-connection server daemon (139.178.89.65:48630). Jul 15 23:47:16.477783 groupadd[1763]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 15 23:47:16.481724 groupadd[1763]: group added to /etc/gshadow: name=google-sudoers Jul 15 23:47:16.529734 groupadd[1763]: new group: name=google-sudoers, GID=1000 Jul 15 23:47:16.559617 google-accounts[1743]: INFO Starting Google Accounts daemon. Jul 15 23:47:16.572448 google-accounts[1743]: WARNING OS Login not installed. Jul 15 23:47:16.574099 google-accounts[1743]: INFO Creating a new user account for 0. Jul 15 23:47:16.579290 init.sh[1772]: useradd: invalid user name '0': use --badname to ignore Jul 15 23:47:16.579815 google-accounts[1743]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 15 23:47:16.666467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:16.677720 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:47:16.681400 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:47:16.687188 systemd[1]: Startup finished in 3.746s (kernel) + 11.464s (initrd) + 9.450s (userspace) = 24.661s. Jul 15 23:47:16.746239 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 48630 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:16.748357 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:16.755911 systemd-logind[1545]: New session 3 of user core. Jul 15 23:47:16.762044 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:47:17.000089 systemd-resolved[1384]: Clock change detected. Flushing caches. Jul 15 23:47:17.001498 google-clock-skew[1744]: INFO Synced system time with hardware clock. Jul 15 23:47:17.087239 sshd[1784]: Connection closed by 139.178.89.65 port 48630 Jul 15 23:47:17.089121 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:17.094616 systemd[1]: sshd@2-10.128.0.95:22-139.178.89.65:48630.service: Deactivated successfully. Jul 15 23:47:17.097746 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:47:17.099429 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:47:17.103015 systemd-logind[1545]: Removed session 3. Jul 15 23:47:17.656297 kubelet[1779]: E0715 23:47:17.656224 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:47:17.659157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:47:17.659419 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 15 23:47:17.660108 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 269M memory peak. Jul 15 23:47:27.144792 systemd[1]: Started sshd@3-10.128.0.95:22-139.178.89.65:45596.service - OpenSSH per-connection server daemon (139.178.89.65:45596). Jul 15 23:47:27.457650 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 45596 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:27.459318 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:27.466781 systemd-logind[1545]: New session 4 of user core. Jul 15 23:47:27.474680 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:47:27.668125 sshd[1798]: Connection closed by 139.178.89.65 port 45596 Jul 15 23:47:27.669050 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:27.674742 systemd[1]: sshd@3-10.128.0.95:22-139.178.89.65:45596.service: Deactivated successfully. Jul 15 23:47:27.677379 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:47:27.679282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:47:27.680326 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:47:27.683285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:27.685414 systemd-logind[1545]: Removed session 4. Jul 15 23:47:27.721019 systemd[1]: Started sshd@4-10.128.0.95:22-139.178.89.65:45600.service - OpenSSH per-connection server daemon (139.178.89.65:45600). Jul 15 23:47:28.027644 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 45600 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:28.030751 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:28.039528 systemd-logind[1545]: New session 5 of user core. Jul 15 23:47:28.048681 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 23:47:28.053644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:28.066972 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:47:28.114229 kubelet[1814]: E0715 23:47:28.114170 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:47:28.118788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:47:28.119041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:47:28.119656 systemd[1]: kubelet.service: Consumed 203ms CPU time, 109.5M memory peak. Jul 15 23:47:28.237809 sshd[1815]: Connection closed by 139.178.89.65 port 45600 Jul 15 23:47:28.238665 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:28.244268 systemd[1]: sshd@4-10.128.0.95:22-139.178.89.65:45600.service: Deactivated successfully. Jul 15 23:47:28.246758 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:47:28.247876 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:47:28.249744 systemd-logind[1545]: Removed session 5. 
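The kubelet crash loop above comes from the missing /var/lib/kubelet/config.yaml; on a node like this the file is normally generated by kubeadm during init/join, so failures before that step are expected rather than a sign of a broken unit. Purely as an illustration of what kubelet is looking for (not a recommended fix), a sketch that writes the smallest valid KubeletConfiguration, with every other field left to kubelet's built-in defaults:

```python
#!/usr/bin/env python3
"""Illustrative only: create a minimal KubeletConfiguration at the path the log
shows kubelet failing to read. On a kubeadm-managed node this file is generated
by 'kubeadm init' / 'kubeadm join'; do not hand-write it in production."""
import pathlib

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# All other fields are omitted and take kubelet's defaults.
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_CONFIG)
print("wrote", path)
```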
Jul 15 23:47:28.295928 systemd[1]: Started sshd@5-10.128.0.95:22-139.178.89.65:45616.service - OpenSSH per-connection server daemon (139.178.89.65:45616). Jul 15 23:47:28.605867 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 45616 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:28.607656 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:28.615057 systemd-logind[1545]: New session 6 of user core. Jul 15 23:47:28.620662 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 23:47:28.819427 sshd[1829]: Connection closed by 139.178.89.65 port 45616 Jul 15 23:47:28.820314 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:28.825834 systemd[1]: sshd@5-10.128.0.95:22-139.178.89.65:45616.service: Deactivated successfully. Jul 15 23:47:28.828239 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:47:28.829348 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:47:28.831283 systemd-logind[1545]: Removed session 6. Jul 15 23:47:28.884741 systemd[1]: Started sshd@6-10.128.0.95:22-139.178.89.65:45624.service - OpenSSH per-connection server daemon (139.178.89.65:45624). Jul 15 23:47:29.182844 sshd[1835]: Accepted publickey for core from 139.178.89.65 port 45624 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:29.184602 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:29.192062 systemd-logind[1545]: New session 7 of user core. Jul 15 23:47:29.197662 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:47:29.373971 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:47:29.374536 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:47:29.391101 sudo[1838]: pam_unix(sudo:session): session closed for user root Jul 15 23:47:29.433739 sshd[1837]: Connection closed by 139.178.89.65 port 45624 Jul 15 23:47:29.435017 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:29.440784 systemd[1]: sshd@6-10.128.0.95:22-139.178.89.65:45624.service: Deactivated successfully. Jul 15 23:47:29.442994 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:47:29.444321 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:47:29.446503 systemd-logind[1545]: Removed session 7. Jul 15 23:47:29.489931 systemd[1]: Started sshd@7-10.128.0.95:22-139.178.89.65:33100.service - OpenSSH per-connection server daemon (139.178.89.65:33100). Jul 15 23:47:29.793166 sshd[1844]: Accepted publickey for core from 139.178.89.65 port 33100 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:29.794631 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:29.802019 systemd-logind[1545]: New session 8 of user core. Jul 15 23:47:29.809651 systemd[1]: Started session-8.scope - Session 8 of User core. 
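The repeated "Accepted publickey for core ... RSA SHA256:zCIIJYjxbL8whX73/..." lines identify the client key by its OpenSSH-style fingerprint: SHA-256 over the decoded public-key blob, base64-encoded with the trailing padding stripped. A small sketch that reproduces that computation from an authorized_keys entry; the sample key below is a placeholder, not the key from this log:

```python
#!/usr/bin/env python3
"""Compute an OpenSSH-style SHA256 key fingerprint, as printed in the sshd
"Accepted publickey" lines above. The sample key is a placeholder only."""
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest as base64 without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    sample = ("ssh-ed25519 "
              "AAAAC3NzaC1lZDI1NTE5AAAAIHp6vCDBBWPe2lu8UHRiYjgUHDbZvcTrUHYXg2cBM9oR "
              "demo@example")
    print(ssh_fingerprint(sample))
```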
Jul 15 23:47:29.969235 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:47:29.969734 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:47:29.976061 sudo[1848]: pam_unix(sudo:session): session closed for user root Jul 15 23:47:29.988883 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:47:29.989343 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:47:30.001624 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:47:30.051769 augenrules[1870]: No rules Jul 15 23:47:30.052296 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:47:30.052663 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:47:30.054893 sudo[1847]: pam_unix(sudo:session): session closed for user root Jul 15 23:47:30.097259 sshd[1846]: Connection closed by 139.178.89.65 port 33100 Jul 15 23:47:30.098116 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Jul 15 23:47:30.103716 systemd[1]: sshd@7-10.128.0.95:22-139.178.89.65:33100.service: Deactivated successfully. Jul 15 23:47:30.106191 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:47:30.107424 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:47:30.109396 systemd-logind[1545]: Removed session 8. Jul 15 23:47:30.155800 systemd[1]: Started sshd@8-10.128.0.95:22-139.178.89.65:33104.service - OpenSSH per-connection server daemon (139.178.89.65:33104). Jul 15 23:47:30.460413 sshd[1879]: Accepted publickey for core from 139.178.89.65 port 33104 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:47:30.462638 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:47:30.469536 systemd-logind[1545]: New session 9 of user core. Jul 15 23:47:30.474657 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:47:30.639124 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:47:30.639633 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:47:31.123324 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:47:31.140039 (dockerd)[1900]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:47:31.450915 dockerd[1900]: time="2025-07-15T23:47:31.450742140Z" level=info msg="Starting up" Jul 15 23:47:31.451821 dockerd[1900]: time="2025-07-15T23:47:31.451782896Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:47:31.489797 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3556587045-merged.mount: Deactivated successfully. Jul 15 23:47:31.528436 dockerd[1900]: time="2025-07-15T23:47:31.528383931Z" level=info msg="Loading containers: start." Jul 15 23:47:31.546479 kernel: Initializing XFRM netlink socket Jul 15 23:47:31.862978 systemd-networkd[1460]: docker0: Link UP Jul 15 23:47:31.868308 dockerd[1900]: time="2025-07-15T23:47:31.868256585Z" level=info msg="Loading containers: done." 
Jul 15 23:47:31.883966 dockerd[1900]: time="2025-07-15T23:47:31.883909622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:47:31.884167 dockerd[1900]: time="2025-07-15T23:47:31.883995708Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:47:31.884167 dockerd[1900]: time="2025-07-15T23:47:31.884128193Z" level=info msg="Initializing buildkit" Jul 15 23:47:31.913690 dockerd[1900]: time="2025-07-15T23:47:31.913624058Z" level=info msg="Completed buildkit initialization" Jul 15 23:47:31.923546 dockerd[1900]: time="2025-07-15T23:47:31.923471169Z" level=info msg="Daemon has completed initialization" Jul 15 23:47:31.923935 dockerd[1900]: time="2025-07-15T23:47:31.923730511Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:47:31.923772 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:47:32.733537 containerd[1579]: time="2025-07-15T23:47:32.733481130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Jul 15 23:47:33.238906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659145592.mount: Deactivated successfully. Jul 15 23:47:34.878719 containerd[1579]: time="2025-07-15T23:47:34.878649522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:34.880086 containerd[1579]: time="2025-07-15T23:47:34.880033027Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30084865" Jul 15 23:47:34.881330 containerd[1579]: time="2025-07-15T23:47:34.881258986Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:34.884593 containerd[1579]: time="2025-07-15T23:47:34.884516655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:34.885964 containerd[1579]: time="2025-07-15T23:47:34.885736496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.152195173s" Jul 15 23:47:34.885964 containerd[1579]: time="2025-07-15T23:47:34.885782144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Jul 15 23:47:34.886740 containerd[1579]: time="2025-07-15T23:47:34.886548855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Jul 15 23:47:36.512034 containerd[1579]: time="2025-07-15T23:47:36.511963394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:36.513330 containerd[1579]: time="2025-07-15T23:47:36.513274782Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26021295" Jul 15 23:47:36.514793 containerd[1579]: time="2025-07-15T23:47:36.514727601Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:36.518053 containerd[1579]: time="2025-07-15T23:47:36.517983832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:36.520055 containerd[1579]: time="2025-07-15T23:47:36.519213763Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.632489994s" Jul 15 23:47:36.520055 containerd[1579]: time="2025-07-15T23:47:36.519258391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Jul 15 23:47:36.520055 containerd[1579]: time="2025-07-15T23:47:36.519893027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Jul 15 23:47:37.953828 containerd[1579]: time="2025-07-15T23:47:37.953757957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:37.955088 containerd[1579]: time="2025-07-15T23:47:37.955034858Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20156929" Jul 15 23:47:37.956560 containerd[1579]: time="2025-07-15T23:47:37.956493055Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:37.960225 containerd[1579]: time="2025-07-15T23:47:37.960177058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:37.965246 containerd[1579]: time="2025-07-15T23:47:37.965006040Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.445078902s" Jul 15 23:47:37.965246 containerd[1579]: time="2025-07-15T23:47:37.965058211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Jul 15 23:47:37.966398 containerd[1579]: time="2025-07-15T23:47:37.966364459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Jul 15 23:47:38.369527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 23:47:38.371983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:47:38.894932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:38.908422 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:47:38.983557 kubelet[2174]: E0715 23:47:38.983383 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:47:38.988649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:47:38.988895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:47:38.989411 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.4M memory peak. Jul 15 23:47:39.437818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202579530.mount: Deactivated successfully. Jul 15 23:47:40.105403 containerd[1579]: time="2025-07-15T23:47:40.105340532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:40.106620 containerd[1579]: time="2025-07-15T23:47:40.106574744Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31894561" Jul 15 23:47:40.108109 containerd[1579]: time="2025-07-15T23:47:40.108030876Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:40.110884 containerd[1579]: time="2025-07-15T23:47:40.110783921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:40.113316 containerd[1579]: time="2025-07-15T23:47:40.112040919Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.145267104s" Jul 15 23:47:40.113316 containerd[1579]: time="2025-07-15T23:47:40.112092447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Jul 15 23:47:40.113316 containerd[1579]: time="2025-07-15T23:47:40.113027127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 23:47:40.596281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287148681.mount: Deactivated successfully. 
Jul 15 23:47:41.840256 containerd[1579]: time="2025-07-15T23:47:41.840186183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:41.841772 containerd[1579]: time="2025-07-15T23:47:41.841685521Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Jul 15 23:47:41.842595 containerd[1579]: time="2025-07-15T23:47:41.842527974Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:41.845956 containerd[1579]: time="2025-07-15T23:47:41.845891449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:41.847525 containerd[1579]: time="2025-07-15T23:47:41.847302580Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.734237361s" Jul 15 23:47:41.847525 containerd[1579]: time="2025-07-15T23:47:41.847347367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 15 23:47:41.848232 containerd[1579]: time="2025-07-15T23:47:41.848202333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:47:42.305156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723121062.mount: Deactivated successfully. 
Jul 15 23:47:42.311682 containerd[1579]: time="2025-07-15T23:47:42.311618220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:47:42.312728 containerd[1579]: time="2025-07-15T23:47:42.312676061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jul 15 23:47:42.314188 containerd[1579]: time="2025-07-15T23:47:42.314103563Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:47:42.316871 containerd[1579]: time="2025-07-15T23:47:42.316797803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:47:42.318488 containerd[1579]: time="2025-07-15T23:47:42.318424930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 470.180134ms" Jul 15 23:47:42.318579 containerd[1579]: time="2025-07-15T23:47:42.318494140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 23:47:42.319440 containerd[1579]: time="2025-07-15T23:47:42.319320603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 23:47:42.752220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935713488.mount: Deactivated successfully. 
Jul 15 23:47:44.831566 containerd[1579]: time="2025-07-15T23:47:44.831504733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:44.833191 containerd[1579]: time="2025-07-15T23:47:44.833134122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58251906" Jul 15 23:47:44.834464 containerd[1579]: time="2025-07-15T23:47:44.834378155Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:44.837838 containerd[1579]: time="2025-07-15T23:47:44.837749108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:47:44.839340 containerd[1579]: time="2025-07-15T23:47:44.839173243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.519814039s" Jul 15 23:47:44.839340 containerd[1579]: time="2025-07-15T23:47:44.839227267Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 15 23:47:45.164818 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 23:47:48.306485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:48.306815 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.4M memory peak. Jul 15 23:47:48.310257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:48.352101 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-9.scope)... Jul 15 23:47:48.352121 systemd[1]: Reloading... Jul 15 23:47:48.517542 zram_generator::config[2373]: No configuration found. Jul 15 23:47:48.664397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:47:48.851195 systemd[1]: Reloading finished in 498 ms. Jul 15 23:47:48.928021 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:47:48.928561 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:47:48.929044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:48.929115 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.3M memory peak. Jul 15 23:47:48.933277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:49.682305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:49.697003 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:47:49.750801 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:47:49.750801 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:47:49.750801 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:47:49.751363 kubelet[2425]: I0715 23:47:49.750859 2425 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:47:51.200480 kubelet[2425]: I0715 23:47:51.199602 2425 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:47:51.200480 kubelet[2425]: I0715 23:47:51.199642 2425 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:47:51.200480 kubelet[2425]: I0715 23:47:51.200213 2425 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:47:51.255552 kubelet[2425]: I0715 23:47:51.255515 2425 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:47:51.255837 kubelet[2425]: E0715 23:47:51.255797 2425 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 23:47:51.265209 kubelet[2425]: I0715 23:47:51.265179 2425 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:47:51.270289 kubelet[2425]: I0715 23:47:51.270252 2425 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:47:51.270670 kubelet[2425]: I0715 23:47:51.270628 2425 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:47:51.270894 kubelet[2425]: I0715 23:47:51.270657 2425 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:47:51.271095 kubelet[2425]: I0715 23:47:51.270896 2425 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:47:51.271095 kubelet[2425]: I0715 23:47:51.270914 2425 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:47:51.271095 kubelet[2425]: I0715 23:47:51.271091 2425 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:47:51.276911 kubelet[2425]: I0715 23:47:51.276864 2425 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:47:51.276911 kubelet[2425]: I0715 23:47:51.276910 2425 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:47:51.277070 kubelet[2425]: I0715 23:47:51.276949 2425 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:47:51.277070 kubelet[2425]: I0715 23:47:51.276971 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:47:51.284477 kubelet[2425]: E0715 23:47:51.283910 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9&limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:47:51.284721 kubelet[2425]: E0715 23:47:51.284695 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:47:51.285262 kubelet[2425]: I0715 23:47:51.285241 2425 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:47:51.286173 kubelet[2425]: I0715 23:47:51.286125 2425 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:47:51.287908 kubelet[2425]: W0715 23:47:51.287862 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:47:51.307958 kubelet[2425]: I0715 23:47:51.307925 2425 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:47:51.308075 kubelet[2425]: I0715 23:47:51.308003 2425 server.go:1289] "Started kubelet" Jul 15 23:47:51.308379 kubelet[2425]: I0715 23:47:51.308301 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:47:51.310475 kubelet[2425]: I0715 23:47:51.309171 2425 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:47:51.310475 kubelet[2425]: I0715 23:47:51.309149 2425 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:47:51.310676 kubelet[2425]: I0715 23:47:51.310652 2425 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:47:51.314145 kubelet[2425]: I0715 23:47:51.313550 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:47:51.321769 kubelet[2425]: E0715 23:47:51.319724 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9.185291945764d0aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,UID:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,},FirstTimestamp:2025-07-15 23:47:51.307956394 +0000 UTC m=+1.605379930,LastTimestamp:2025-07-15 23:47:51.307956394 +0000 UTC m=+1.605379930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,}" Jul 15 23:47:51.323041 kubelet[2425]: I0715 23:47:51.323013 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:47:51.328597 kubelet[2425]: I0715 23:47:51.327203 2425 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:47:51.328597 kubelet[2425]: E0715 23:47:51.327466 2425 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" Jul 15 23:47:51.328597 kubelet[2425]: I0715 23:47:51.327554 2425 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:47:51.328597 kubelet[2425]: I0715 23:47:51.327629 2425 reconciler.go:26] "Reconciler: start to sync state" Jul 15 
23:47:51.328597 kubelet[2425]: E0715 23:47:51.328103 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:47:51.328597 kubelet[2425]: E0715 23:47:51.328208 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9?timeout=10s\": dial tcp 10.128.0.95:6443: connect: connection refused" interval="200ms" Jul 15 23:47:51.329001 kubelet[2425]: I0715 23:47:51.328975 2425 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:47:51.329480 kubelet[2425]: I0715 23:47:51.329433 2425 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:47:51.331283 kubelet[2425]: E0715 23:47:51.331260 2425 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:47:51.331757 kubelet[2425]: I0715 23:47:51.331737 2425 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:47:51.345584 kubelet[2425]: I0715 23:47:51.345393 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:47:51.347884 kubelet[2425]: I0715 23:47:51.347856 2425 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 23:47:51.348027 kubelet[2425]: I0715 23:47:51.348013 2425 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:47:51.348181 kubelet[2425]: I0715 23:47:51.348161 2425 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:47:51.348270 kubelet[2425]: I0715 23:47:51.348259 2425 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:47:51.348432 kubelet[2425]: E0715 23:47:51.348404 2425 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:47:51.359948 kubelet[2425]: E0715 23:47:51.359915 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:47:51.372181 kubelet[2425]: I0715 23:47:51.372155 2425 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:47:51.372341 kubelet[2425]: I0715 23:47:51.372307 2425 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:47:51.372410 kubelet[2425]: I0715 23:47:51.372354 2425 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:47:51.374638 kubelet[2425]: I0715 23:47:51.374614 2425 policy_none.go:49] "None policy: Start" Jul 15 23:47:51.374638 kubelet[2425]: I0715 23:47:51.374641 2425 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:47:51.374808 kubelet[2425]: I0715 23:47:51.374658 2425 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:47:51.382834 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:47:51.396662 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 23:47:51.401400 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:47:51.417488 kubelet[2425]: E0715 23:47:51.417443 2425 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:47:51.418206 kubelet[2425]: I0715 23:47:51.417707 2425 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:47:51.418206 kubelet[2425]: I0715 23:47:51.417902 2425 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:47:51.418206 kubelet[2425]: I0715 23:47:51.418152 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:47:51.420597 kubelet[2425]: E0715 23:47:51.420568 2425 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:47:51.420704 kubelet[2425]: E0715 23:47:51.420623 2425 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" Jul 15 23:47:51.499003 systemd[1]: Created slice kubepods-burstable-pod69e1686f489b313ec1683bc994ead47e.slice - libcontainer container kubepods-burstable-pod69e1686f489b313ec1683bc994ead47e.slice. 
Jul 15 23:47:51.510413 kubelet[2425]: E0715 23:47:51.510365 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.515611 systemd[1]: Created slice kubepods-burstable-pod788f309e1bbe73063d365fc18c870658.slice - libcontainer container kubepods-burstable-pod788f309e1bbe73063d365fc18c870658.slice. Jul 15 23:47:51.521481 kubelet[2425]: E0715 23:47:51.521099 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529101 kubelet[2425]: I0715 23:47:51.529057 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529286 kubelet[2425]: I0715 23:47:51.529262 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529424 kubelet[2425]: E0715 23:47:51.529387 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9?timeout=10s\": dial tcp 10.128.0.95:6443: connect: connection refused" interval="400ms" Jul 15 23:47:51.529424 kubelet[2425]: I0715 23:47:51.529404 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529579 kubelet[2425]: I0715 23:47:51.529540 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/788f309e1bbe73063d365fc18c870658-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"788f309e1bbe73063d365fc18c870658\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529637 kubelet[2425]: I0715 23:47:51.529576 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " 
pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529637 kubelet[2425]: I0715 23:47:51.529607 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529738 kubelet[2425]: I0715 23:47:51.529637 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529738 kubelet[2425]: I0715 23:47:51.529666 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.529738 kubelet[2425]: I0715 23:47:51.529693 2425 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.541010 kubelet[2425]: I0715 23:47:51.540987 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.541722 kubelet[2425]: E0715 23:47:51.541672 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.95:6443/api/v1/nodes\": dial tcp 10.128.0.95:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.560684 systemd[1]: Created slice kubepods-burstable-podbeab2261891974040050083d34a28f97.slice - libcontainer container kubepods-burstable-podbeab2261891974040050083d34a28f97.slice. 
Jul 15 23:47:51.563968 kubelet[2425]: E0715 23:47:51.563929 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.746595 kubelet[2425]: I0715 23:47:51.746535 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.746958 kubelet[2425]: E0715 23:47:51.746920 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.95:6443/api/v1/nodes\": dial tcp 10.128.0.95:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:51.812978 containerd[1579]: time="2025-07-15T23:47:51.812826040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:69e1686f489b313ec1683bc994ead47e,Namespace:kube-system,Attempt:0,}" Jul 15 23:47:51.824744 containerd[1579]: time="2025-07-15T23:47:51.824392730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:788f309e1bbe73063d365fc18c870658,Namespace:kube-system,Attempt:0,}" Jul 15 23:47:51.869663 containerd[1579]: time="2025-07-15T23:47:51.869603804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:beab2261891974040050083d34a28f97,Namespace:kube-system,Attempt:0,}" Jul 15 23:47:51.871335 containerd[1579]: time="2025-07-15T23:47:51.871280518Z" level=info msg="connecting to shim 81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb" address="unix:///run/containerd/s/94bfe5d8ebd84592cd31f3f1f6fd1f12af0c8e187e9e237d8c31c9b4b7d68f3f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:47:51.874639 containerd[1579]: time="2025-07-15T23:47:51.874593638Z" level=info msg="connecting to shim 7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd" address="unix:///run/containerd/s/3b02dc379b8f67c578dcb35a1f03a597d1cf7b1c4499505506356b7b0fabacff" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:47:51.928543 containerd[1579]: time="2025-07-15T23:47:51.928415377Z" level=info msg="connecting to shim 68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a" address="unix:///run/containerd/s/204cb6b140d19d6335c953a62b0a96c70579ead98fb81793e3c48e0eae65ae30" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:47:51.931075 kubelet[2425]: E0715 23:47:51.931022 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9?timeout=10s\": dial tcp 10.128.0.95:6443: connect: connection refused" interval="800ms" Jul 15 23:47:51.947856 systemd[1]: Started cri-containerd-7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd.scope - libcontainer container 7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd. Jul 15 23:47:51.963670 systemd[1]: Started cri-containerd-81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb.scope - libcontainer container 81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb. 
Jul 15 23:47:51.977515 systemd[1]: Started cri-containerd-68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a.scope - libcontainer container 68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a. Jul 15 23:47:52.083835 containerd[1579]: time="2025-07-15T23:47:52.083523393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:69e1686f489b313ec1683bc994ead47e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd\"" Jul 15 23:47:52.091108 kubelet[2425]: E0715 23:47:52.090111 2425 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c01" Jul 15 23:47:52.097943 containerd[1579]: time="2025-07-15T23:47:52.097901282Z" level=info msg="CreateContainer within sandbox \"7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:47:52.099324 containerd[1579]: time="2025-07-15T23:47:52.099268249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:788f309e1bbe73063d365fc18c870658,Namespace:kube-system,Attempt:0,} returns sandbox id \"81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb\"" Jul 15 23:47:52.100878 kubelet[2425]: E0715 23:47:52.100846 2425 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df1" Jul 15 23:47:52.105434 containerd[1579]: time="2025-07-15T23:47:52.105401972Z" level=info msg="CreateContainer within sandbox \"81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:47:52.112429 containerd[1579]: time="2025-07-15T23:47:52.112395857Z" level=info msg="Container ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:47:52.113716 containerd[1579]: time="2025-07-15T23:47:52.113678431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,Uid:beab2261891974040050083d34a28f97,Namespace:kube-system,Attempt:0,} returns sandbox id \"68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a\"" Jul 15 23:47:52.116315 kubelet[2425]: E0715 23:47:52.116261 2425 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df1" Jul 15 23:47:52.120056 containerd[1579]: time="2025-07-15T23:47:52.119831179Z" level=info msg="CreateContainer within sandbox \"68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:47:52.124092 containerd[1579]: time="2025-07-15T23:47:52.124058385Z" level=info msg="Container 122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:47:52.125668 containerd[1579]: 
time="2025-07-15T23:47:52.125617238Z" level=info msg="CreateContainer within sandbox \"7d7a6b4cccab86fc8f81c6a1fe9edba81c7ea3d30e81df72d308fa27027e6ecd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17\"" Jul 15 23:47:52.127031 containerd[1579]: time="2025-07-15T23:47:52.126992433Z" level=info msg="StartContainer for \"ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17\"" Jul 15 23:47:52.129580 containerd[1579]: time="2025-07-15T23:47:52.129540692Z" level=info msg="connecting to shim ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17" address="unix:///run/containerd/s/3b02dc379b8f67c578dcb35a1f03a597d1cf7b1c4499505506356b7b0fabacff" protocol=ttrpc version=3 Jul 15 23:47:52.138640 containerd[1579]: time="2025-07-15T23:47:52.138569784Z" level=info msg="Container d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:47:52.139516 containerd[1579]: time="2025-07-15T23:47:52.139479243Z" level=info msg="CreateContainer within sandbox \"81cd2749a3ba3668347fdded0b33e27f322b268358a380837592f3e37de6bcdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246\"" Jul 15 23:47:52.140607 containerd[1579]: time="2025-07-15T23:47:52.140580384Z" level=info msg="StartContainer for \"122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246\"" Jul 15 23:47:52.143700 containerd[1579]: time="2025-07-15T23:47:52.143667272Z" level=info msg="connecting to shim 122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246" address="unix:///run/containerd/s/94bfe5d8ebd84592cd31f3f1f6fd1f12af0c8e187e9e237d8c31c9b4b7d68f3f" protocol=ttrpc version=3 Jul 15 23:47:52.152490 containerd[1579]: time="2025-07-15T23:47:52.152427632Z" level=info msg="CreateContainer within sandbox \"68606ae22642eba049160672faadb3e2998c5eab2ab112101c7c95c42b6a521a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699\"" Jul 15 23:47:52.153187 kubelet[2425]: I0715 23:47:52.153153 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:52.154647 kubelet[2425]: E0715 23:47:52.154563 2425 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.95:6443/api/v1/nodes\": dial tcp 10.128.0.95:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:52.154781 containerd[1579]: time="2025-07-15T23:47:52.154749454Z" level=info msg="StartContainer for \"d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699\"" Jul 15 23:47:52.164283 containerd[1579]: time="2025-07-15T23:47:52.164249162Z" level=info msg="connecting to shim d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699" address="unix:///run/containerd/s/204cb6b140d19d6335c953a62b0a96c70579ead98fb81793e3c48e0eae65ae30" protocol=ttrpc version=3 Jul 15 23:47:52.171497 systemd[1]: Started cri-containerd-ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17.scope - libcontainer container ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17. 
Jul 15 23:47:52.190618 systemd[1]: Started cri-containerd-122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246.scope - libcontainer container 122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246. Jul 15 23:47:52.203572 kubelet[2425]: E0715 23:47:52.203510 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:47:52.207184 systemd[1]: Started cri-containerd-d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699.scope - libcontainer container d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699. Jul 15 23:47:52.223961 kubelet[2425]: E0715 23:47:52.223911 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9&limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:47:52.322366 kubelet[2425]: E0715 23:47:52.321621 2425 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:47:52.343446 containerd[1579]: time="2025-07-15T23:47:52.343309196Z" level=info msg="StartContainer for \"ab4e9dd69548e8461ab05a589335eb74048eb7a9b001a7b3d609628e4e778c17\" returns successfully" Jul 15 23:47:52.356152 containerd[1579]: time="2025-07-15T23:47:52.354877789Z" level=info msg="StartContainer for \"122df1621f0a9672dc63893839f8f3e86a877bb3cbcd0ffcaf6ea13a81d07246\" returns successfully" Jul 15 23:47:52.360476 containerd[1579]: time="2025-07-15T23:47:52.358234424Z" level=info msg="StartContainer for \"d5f57ae00fd3efdae696ddb3668cae0d6cc0dfb41c2d286cbd4f086e1e1de699\" returns successfully" Jul 15 23:47:52.377052 kubelet[2425]: E0715 23:47:52.377018 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:52.384860 kubelet[2425]: E0715 23:47:52.384830 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:52.386412 kubelet[2425]: E0715 23:47:52.386371 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:52.962772 kubelet[2425]: I0715 23:47:52.962733 2425 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:53.389209 kubelet[2425]: E0715 23:47:53.389065 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:53.389791 kubelet[2425]: E0715 23:47:53.389726 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:53.885949 kubelet[2425]: E0715 23:47:53.885904 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:54.391191 kubelet[2425]: E0715 23:47:54.391110 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:54.393220 kubelet[2425]: E0715 23:47:54.393183 2425 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.286300 kubelet[2425]: I0715 23:47:55.286257 2425 apiserver.go:52] "Watching apiserver" Jul 15 23:47:55.419900 kubelet[2425]: E0715 23:47:55.419822 2425 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.428156 kubelet[2425]: I0715 23:47:55.427618 2425 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:47:55.430867 kubelet[2425]: I0715 23:47:55.430522 2425 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.430867 kubelet[2425]: E0715 23:47:55.430565 2425 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\": node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" not found" Jul 15 23:47:55.527706 kubelet[2425]: I0715 23:47:55.527662 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.554882 kubelet[2425]: E0715 23:47:55.554668 2425 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9.185291945764d0aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,UID:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,},FirstTimestamp:2025-07-15 23:47:51.307956394 +0000 UTC m=+1.605379930,LastTimestamp:2025-07-15 23:47:51.307956394 +0000 UTC m=+1.605379930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9,}" Jul 15 23:47:55.610866 kubelet[2425]: E0715 23:47:55.610801 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.610866 kubelet[2425]: I0715 23:47:55.610865 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.629098 kubelet[2425]: E0715 23:47:55.628815 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.629098 kubelet[2425]: I0715 23:47:55.628858 2425 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:55.645487 kubelet[2425]: E0715 23:47:55.645435 2425 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:57.171551 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-9.scope)... Jul 15 23:47:57.171573 systemd[1]: Reloading... Jul 15 23:47:57.322495 zram_generator::config[2744]: No configuration found. Jul 15 23:47:57.443803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:47:57.639526 systemd[1]: Reloading finished in 467 ms. Jul 15 23:47:57.676991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:57.698280 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:47:57.698696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:57.698799 systemd[1]: kubelet.service: Consumed 2.133s CPU time, 130.3M memory peak. Jul 15 23:47:57.701875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:47:58.077102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:47:58.088108 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:47:58.157486 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:47:58.157486 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:47:58.157486 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:47:58.157486 kubelet[2792]: I0715 23:47:58.154929 2792 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:47:58.168315 kubelet[2792]: I0715 23:47:58.168262 2792 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:47:58.168315 kubelet[2792]: I0715 23:47:58.168309 2792 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:47:58.169297 kubelet[2792]: I0715 23:47:58.168708 2792 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:47:58.172501 kubelet[2792]: I0715 23:47:58.171329 2792 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 23:47:58.175607 kubelet[2792]: I0715 23:47:58.175577 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:47:58.183848 kubelet[2792]: I0715 23:47:58.183828 2792 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:47:58.189552 kubelet[2792]: I0715 23:47:58.189507 2792 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:47:58.190405 kubelet[2792]: I0715 23:47:58.190019 2792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:47:58.190405 kubelet[2792]: I0715 23:47:58.190066 2792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:47:58.190405 kubelet[2792]: I0715 23:47:58.190286 2792 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:47:58.190405 kubelet[2792]: I0715 23:47:58.190302 2792 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:47:58.190764 
kubelet[2792]: I0715 23:47:58.190360 2792 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:47:58.191126 kubelet[2792]: I0715 23:47:58.191097 2792 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:47:58.191925 kubelet[2792]: I0715 23:47:58.191902 2792 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:47:58.192056 kubelet[2792]: I0715 23:47:58.192044 2792 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:47:58.192469 kubelet[2792]: I0715 23:47:58.192254 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:47:58.197636 sudo[2806]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:47:58.198209 sudo[2806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:47:58.222479 kubelet[2792]: I0715 23:47:58.219319 2792 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:47:58.222479 kubelet[2792]: I0715 23:47:58.221286 2792 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:47:58.250685 kubelet[2792]: I0715 23:47:58.250635 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:47:58.250971 kubelet[2792]: I0715 23:47:58.250904 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:47:58.251413 kubelet[2792]: I0715 23:47:58.251384 2792 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:47:58.256999 kubelet[2792]: I0715 23:47:58.254833 2792 server.go:1289] "Started kubelet" Jul 15 23:47:58.256999 kubelet[2792]: I0715 23:47:58.256065 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:47:58.256999 kubelet[2792]: I0715 23:47:58.256898 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:47:58.259843 kubelet[2792]: I0715 23:47:58.259821 2792 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:47:58.262772 kubelet[2792]: I0715 23:47:58.262742 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:47:58.268838 kubelet[2792]: I0715 23:47:58.263905 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:47:58.269636 kubelet[2792]: I0715 23:47:58.263921 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:47:58.270427 kubelet[2792]: I0715 23:47:58.270018 2792 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:47:58.280475 kubelet[2792]: E0715 23:47:58.280189 2792 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:47:58.282427 kubelet[2792]: I0715 23:47:58.282331 2792 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:47:58.282628 kubelet[2792]: I0715 23:47:58.282486 2792 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:47:58.282693 kubelet[2792]: I0715 23:47:58.282671 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:47:58.304152 kubelet[2792]: I0715 23:47:58.301961 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 23:47:58.349238 kubelet[2792]: I0715 23:47:58.349129 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:47:58.352544 kubelet[2792]: I0715 23:47:58.352515 2792 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:47:58.353492 kubelet[2792]: I0715 23:47:58.352656 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:47:58.353492 kubelet[2792]: I0715 23:47:58.352674 2792 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:47:58.353492 kubelet[2792]: E0715 23:47:58.352733 2792 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:47:58.407871 kubelet[2792]: I0715 23:47:58.407834 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:47:58.408021 kubelet[2792]: I0715 23:47:58.407899 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:47:58.408021 kubelet[2792]: I0715 23:47:58.407926 2792 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408128 2792 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408146 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408171 2792 policy_none.go:49] "None policy: Start" Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408186 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408203 2792 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:47:58.408703 kubelet[2792]: I0715 23:47:58.408349 2792 state_mem.go:75] "Updated machine memory state" Jul 15 23:47:58.416244 kubelet[2792]: E0715 23:47:58.416212 2792 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:47:58.416440 kubelet[2792]: I0715 23:47:58.416420 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:47:58.417732 kubelet[2792]: I0715 23:47:58.416443 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:47:58.418281 kubelet[2792]: I0715 23:47:58.417904 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:47:58.425082 kubelet[2792]: E0715 23:47:58.424883 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:47:58.454222 kubelet[2792]: I0715 23:47:58.454184 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.458896 kubelet[2792]: I0715 23:47:58.458863 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.459813 kubelet[2792]: I0715 23:47:58.459741 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.473903 kubelet[2792]: I0715 23:47:58.473754 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jul 15 23:47:58.474785 kubelet[2792]: I0715 23:47:58.474442 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jul 15 23:47:58.480712 kubelet[2792]: I0715 23:47:58.480531 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jul 15 23:47:58.537661 kubelet[2792]: I0715 23:47:58.537622 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.547715 kubelet[2792]: I0715 23:47:58.547679 2792 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.547865 kubelet[2792]: I0715 23:47:58.547780 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571148 kubelet[2792]: I0715 23:47:58.570994 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571148 kubelet[2792]: I0715 23:47:58.571046 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571148 kubelet[2792]: I0715 23:47:58.571082 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571148 kubelet[2792]: I0715 23:47:58.571117 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571505 kubelet[2792]: I0715 23:47:58.571146 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571505 kubelet[2792]: I0715 23:47:58.571174 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/beab2261891974040050083d34a28f97-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"beab2261891974040050083d34a28f97\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571505 kubelet[2792]: I0715 23:47:58.571199 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571505 kubelet[2792]: I0715 23:47:58.571225 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69e1686f489b313ec1683bc994ead47e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"69e1686f489b313ec1683bc994ead47e\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:58.571713 kubelet[2792]: I0715 23:47:58.571258 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/788f309e1bbe73063d365fc18c870658-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" (UID: \"788f309e1bbe73063d365fc18c870658\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:59.002366 sudo[2806]: pam_unix(sudo:session): session closed for user root Jul 15 23:47:59.149582 update_engine[1563]: I20250715 23:47:59.149489 1563 update_attempter.cc:509] Updating boot flags... 
Jul 15 23:47:59.196902 kubelet[2792]: I0715 23:47:59.196529 2792 apiserver.go:52] "Watching apiserver" Jul 15 23:47:59.272307 kubelet[2792]: I0715 23:47:59.271558 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:47:59.435722 kubelet[2792]: I0715 23:47:59.431058 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:59.435722 kubelet[2792]: I0715 23:47:59.432269 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:59.449013 kubelet[2792]: I0715 23:47:59.448671 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jul 15 23:47:59.449013 kubelet[2792]: E0715 23:47:59.448769 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" already exists" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:59.464837 kubelet[2792]: I0715 23:47:59.464101 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Jul 15 23:47:59.464837 kubelet[2792]: E0715 23:47:59.464214 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" Jul 15 23:47:59.613925 kubelet[2792]: I0715 23:47:59.611834 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" podStartSLOduration=1.6116236430000002 podStartE2EDuration="1.611623643s" podCreationTimestamp="2025-07-15 23:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:47:59.556215758 +0000 UTC m=+1.459829823" watchObservedRunningTime="2025-07-15 23:47:59.611623643 +0000 UTC m=+1.515237703" Jul 15 23:47:59.654943 kubelet[2792]: I0715 23:47:59.654868 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" podStartSLOduration=1.65484057 podStartE2EDuration="1.65484057s" podCreationTimestamp="2025-07-15 23:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:47:59.616672833 +0000 UTC m=+1.520286887" watchObservedRunningTime="2025-07-15 23:47:59.65484057 +0000 UTC m=+1.558454631" Jul 15 23:47:59.656101 kubelet[2792]: I0715 23:47:59.656048 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9" podStartSLOduration=1.6560269779999999 podStartE2EDuration="1.656026978s" podCreationTimestamp="2025-07-15 23:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:47:59.648051646 +0000 UTC m=+1.551665709" 
watchObservedRunningTime="2025-07-15 23:47:59.656026978 +0000 UTC m=+1.559641041" Jul 15 23:48:01.406532 sudo[1882]: pam_unix(sudo:session): session closed for user root Jul 15 23:48:01.449363 sshd[1881]: Connection closed by 139.178.89.65 port 33104 Jul 15 23:48:01.450484 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jul 15 23:48:01.456614 systemd[1]: sshd@8-10.128.0.95:22-139.178.89.65:33104.service: Deactivated successfully. Jul 15 23:48:01.460159 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:48:01.460559 systemd[1]: session-9.scope: Consumed 6.529s CPU time, 276.9M memory peak. Jul 15 23:48:01.463572 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:48:01.466438 systemd-logind[1545]: Removed session 9. Jul 15 23:48:04.046591 kubelet[2792]: I0715 23:48:04.046553 2792 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:48:04.047341 containerd[1579]: time="2025-07-15T23:48:04.047259623Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:48:04.047883 kubelet[2792]: I0715 23:48:04.047581 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:48:04.922750 systemd[1]: Created slice kubepods-besteffort-pod28726548_5203_40ae_bfc2_80b75ae987a1.slice - libcontainer container kubepods-besteffort-pod28726548_5203_40ae_bfc2_80b75ae987a1.slice. Jul 15 23:48:04.943418 systemd[1]: Created slice kubepods-burstable-podd8109478_ab16_4fc1_b5ca_7ca6ac6330e5.slice - libcontainer container kubepods-burstable-podd8109478_ab16_4fc1_b5ca_7ca6ac6330e5.slice. Jul 15 23:48:05.017309 kubelet[2792]: I0715 23:48:05.017232 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fdc\" (UniqueName: \"kubernetes.io/projected/28726548-5203-40ae-bfc2-80b75ae987a1-kube-api-access-m2fdc\") pod \"kube-proxy-zsk67\" (UID: \"28726548-5203-40ae-bfc2-80b75ae987a1\") " pod="kube-system/kube-proxy-zsk67" Jul 15 23:48:05.017309 kubelet[2792]: I0715 23:48:05.017299 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28726548-5203-40ae-bfc2-80b75ae987a1-kube-proxy\") pod \"kube-proxy-zsk67\" (UID: \"28726548-5203-40ae-bfc2-80b75ae987a1\") " pod="kube-system/kube-proxy-zsk67" Jul 15 23:48:05.017569 kubelet[2792]: I0715 23:48:05.017327 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28726548-5203-40ae-bfc2-80b75ae987a1-xtables-lock\") pod \"kube-proxy-zsk67\" (UID: \"28726548-5203-40ae-bfc2-80b75ae987a1\") " pod="kube-system/kube-proxy-zsk67" Jul 15 23:48:05.017569 kubelet[2792]: I0715 23:48:05.017350 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28726548-5203-40ae-bfc2-80b75ae987a1-lib-modules\") pod \"kube-proxy-zsk67\" (UID: \"28726548-5203-40ae-bfc2-80b75ae987a1\") " pod="kube-system/kube-proxy-zsk67" Jul 15 23:48:05.118092 kubelet[2792]: I0715 23:48:05.118043 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-bpf-maps\") pod \"cilium-gvbmj\" (UID: 
\"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118092 kubelet[2792]: I0715 23:48:05.118101 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-cgroup\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118887 kubelet[2792]: I0715 23:48:05.118128 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-lib-modules\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118887 kubelet[2792]: I0715 23:48:05.118150 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-clustermesh-secrets\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118887 kubelet[2792]: I0715 23:48:05.118173 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-config-path\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118887 kubelet[2792]: I0715 23:48:05.118198 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-net\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.118887 kubelet[2792]: I0715 23:48:05.118232 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-etc-cni-netd\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118258 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-kernel\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118290 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hubble-tls\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118347 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-run\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118371 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hostproc\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118397 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cni-path\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.119502 kubelet[2792]: I0715 23:48:05.118425 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-xtables-lock\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.120411 kubelet[2792]: I0715 23:48:05.118476 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2lc\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-kube-api-access-dv2lc\") pod \"cilium-gvbmj\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " pod="kube-system/cilium-gvbmj" Jul 15 23:48:05.245597 containerd[1579]: time="2025-07-15T23:48:05.243964202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsk67,Uid:28726548-5203-40ae-bfc2-80b75ae987a1,Namespace:kube-system,Attempt:0,}" Jul 15 23:48:05.310229 containerd[1579]: time="2025-07-15T23:48:05.309992461Z" level=info msg="connecting to shim b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15" address="unix:///run/containerd/s/f1f0a379a344df17b118e1d1289500c9600701da984cb47ad1088740d2de8936" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:48:05.317859 systemd[1]: Created slice kubepods-besteffort-pod0734723d_a40d_4b10_897f_745895fb5023.slice - libcontainer container kubepods-besteffort-pod0734723d_a40d_4b10_897f_745895fb5023.slice. Jul 15 23:48:05.373689 systemd[1]: Started cri-containerd-b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15.scope - libcontainer container b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15. 
Jul 15 23:48:05.422876 containerd[1579]: time="2025-07-15T23:48:05.422832161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsk67,Uid:28726548-5203-40ae-bfc2-80b75ae987a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15\"" Jul 15 23:48:05.423118 kubelet[2792]: I0715 23:48:05.423032 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9q74\" (UniqueName: \"kubernetes.io/projected/0734723d-a40d-4b10-897f-745895fb5023-kube-api-access-l9q74\") pod \"cilium-operator-6c4d7847fc-dgxl2\" (UID: \"0734723d-a40d-4b10-897f-745895fb5023\") " pod="kube-system/cilium-operator-6c4d7847fc-dgxl2" Jul 15 23:48:05.423235 kubelet[2792]: I0715 23:48:05.423137 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0734723d-a40d-4b10-897f-745895fb5023-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dgxl2\" (UID: \"0734723d-a40d-4b10-897f-745895fb5023\") " pod="kube-system/cilium-operator-6c4d7847fc-dgxl2" Jul 15 23:48:05.429590 containerd[1579]: time="2025-07-15T23:48:05.429490941Z" level=info msg="CreateContainer within sandbox \"b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:48:05.439286 containerd[1579]: time="2025-07-15T23:48:05.439240939Z" level=info msg="Container 875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:05.458198 containerd[1579]: time="2025-07-15T23:48:05.458139791Z" level=info msg="CreateContainer within sandbox \"b5cd897e9b2f6c3cbbf3e413838cd56e11f73fd6865d26c5b9a39be995f3ea15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44\"" Jul 15 23:48:05.458760 containerd[1579]: time="2025-07-15T23:48:05.458708724Z" level=info msg="StartContainer for \"875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44\"" Jul 15 23:48:05.461363 containerd[1579]: time="2025-07-15T23:48:05.461292921Z" level=info msg="connecting to shim 875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44" address="unix:///run/containerd/s/f1f0a379a344df17b118e1d1289500c9600701da984cb47ad1088740d2de8936" protocol=ttrpc version=3 Jul 15 23:48:05.488674 systemd[1]: Started cri-containerd-875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44.scope - libcontainer container 875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44. 
Jul 15 23:48:05.556400 containerd[1579]: time="2025-07-15T23:48:05.555368087Z" level=info msg="StartContainer for \"875c926e37e2e97ec1c4e9e1cc59068a3a4fa25db7b839415879c3146acd9d44\" returns successfully" Jul 15 23:48:05.557772 containerd[1579]: time="2025-07-15T23:48:05.557728043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvbmj,Uid:d8109478-ab16-4fc1-b5ca-7ca6ac6330e5,Namespace:kube-system,Attempt:0,}" Jul 15 23:48:05.583204 containerd[1579]: time="2025-07-15T23:48:05.583133455Z" level=info msg="connecting to shim 6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:48:05.624709 systemd[1]: Started cri-containerd-6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15.scope - libcontainer container 6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15. Jul 15 23:48:05.629687 containerd[1579]: time="2025-07-15T23:48:05.629645817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dgxl2,Uid:0734723d-a40d-4b10-897f-745895fb5023,Namespace:kube-system,Attempt:0,}" Jul 15 23:48:05.672474 containerd[1579]: time="2025-07-15T23:48:05.672375481Z" level=info msg="connecting to shim f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e" address="unix:///run/containerd/s/4cf194bfc1b0ac6ad8955bdab561d43ccc7db62b85bd8c315c90777ea58d6583" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:48:05.691259 containerd[1579]: time="2025-07-15T23:48:05.690491470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gvbmj,Uid:d8109478-ab16-4fc1-b5ca-7ca6ac6330e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\"" Jul 15 23:48:05.696551 containerd[1579]: time="2025-07-15T23:48:05.696406017Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:48:05.720919 systemd[1]: Started cri-containerd-f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e.scope - libcontainer container f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e. Jul 15 23:48:05.836934 containerd[1579]: time="2025-07-15T23:48:05.835812528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dgxl2,Uid:0734723d-a40d-4b10-897f-745895fb5023,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\"" Jul 15 23:48:06.462135 kubelet[2792]: I0715 23:48:06.461665 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zsk67" podStartSLOduration=2.461642475 podStartE2EDuration="2.461642475s" podCreationTimestamp="2025-07-15 23:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:48:06.449788347 +0000 UTC m=+8.353402398" watchObservedRunningTime="2025-07-15 23:48:06.461642475 +0000 UTC m=+8.365256541" Jul 15 23:48:15.216239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524135700.mount: Deactivated successfully. 
Jul 15 23:48:17.899172 containerd[1579]: time="2025-07-15T23:48:17.899114284Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:48:17.900319 containerd[1579]: time="2025-07-15T23:48:17.900282249Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 23:48:17.902082 containerd[1579]: time="2025-07-15T23:48:17.901993797Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:48:17.905637 containerd[1579]: time="2025-07-15T23:48:17.905490820Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.208594746s" Jul 15 23:48:17.905637 containerd[1579]: time="2025-07-15T23:48:17.905539490Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 23:48:17.908613 containerd[1579]: time="2025-07-15T23:48:17.908577177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:48:17.914370 containerd[1579]: time="2025-07-15T23:48:17.914297399Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:48:17.945492 containerd[1579]: time="2025-07-15T23:48:17.941915809Z" level=info msg="Container 4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:17.944200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649534744.mount: Deactivated successfully. Jul 15 23:48:17.955418 containerd[1579]: time="2025-07-15T23:48:17.955264732Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\"" Jul 15 23:48:17.956441 containerd[1579]: time="2025-07-15T23:48:17.956411005Z" level=info msg="StartContainer for \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\"" Jul 15 23:48:17.957916 containerd[1579]: time="2025-07-15T23:48:17.957855934Z" level=info msg="connecting to shim 4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" protocol=ttrpc version=3 Jul 15 23:48:17.994662 systemd[1]: Started cri-containerd-4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2.scope - libcontainer container 4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2. 
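The 12.2 s pull above is the cilium agent image being fetched by digest before its first init container (mount-cgroup) is created. A sketch for checking, or pre-pulling on another node, the same image with crictl; the digest is copied from the log, the commands themselves are generic.

    # Confirm the image is present in containerd's image store.
    crictl images | grep cilium

    # Re-pull (or pre-pull elsewhere) by the exact digest recorded in the log.
    crictl pull quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5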
Jul 15 23:48:18.037486 containerd[1579]: time="2025-07-15T23:48:18.036749593Z" level=info msg="StartContainer for \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" returns successfully" Jul 15 23:48:18.050592 systemd[1]: cri-containerd-4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2.scope: Deactivated successfully. Jul 15 23:48:18.054987 containerd[1579]: time="2025-07-15T23:48:18.054889923Z" level=info msg="received exit event container_id:\"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" id:\"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" pid:3235 exited_at:{seconds:1752623298 nanos:54229792}" Jul 15 23:48:18.055870 containerd[1579]: time="2025-07-15T23:48:18.055768025Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" id:\"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" pid:3235 exited_at:{seconds:1752623298 nanos:54229792}" Jul 15 23:48:18.928868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2-rootfs.mount: Deactivated successfully. Jul 15 23:48:21.155655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919769296.mount: Deactivated successfully. Jul 15 23:48:21.496910 containerd[1579]: time="2025-07-15T23:48:21.496859848Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:48:21.508686 containerd[1579]: time="2025-07-15T23:48:21.508630875Z" level=info msg="Container 212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:21.524331 containerd[1579]: time="2025-07-15T23:48:21.524251118Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\"" Jul 15 23:48:21.525781 containerd[1579]: time="2025-07-15T23:48:21.525655239Z" level=info msg="StartContainer for \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\"" Jul 15 23:48:21.527665 containerd[1579]: time="2025-07-15T23:48:21.527587544Z" level=info msg="connecting to shim 212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" protocol=ttrpc version=3 Jul 15 23:48:21.566709 systemd[1]: Started cri-containerd-212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56.scope - libcontainer container 212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56. Jul 15 23:48:21.614072 containerd[1579]: time="2025-07-15T23:48:21.613715723Z" level=info msg="StartContainer for \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" returns successfully" Jul 15 23:48:21.635028 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:48:21.635444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:48:21.637040 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:48:21.641155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 15 23:48:21.647038 systemd[1]: cri-containerd-212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56.scope: Deactivated successfully. Jul 15 23:48:21.650878 containerd[1579]: time="2025-07-15T23:48:21.650330749Z" level=info msg="received exit event container_id:\"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" id:\"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" pid:3288 exited_at:{seconds:1752623301 nanos:650055417}" Jul 15 23:48:21.651306 containerd[1579]: time="2025-07-15T23:48:21.651266200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" id:\"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" pid:3288 exited_at:{seconds:1752623301 nanos:650055417}" Jul 15 23:48:21.689118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:48:22.142387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56-rootfs.mount: Deactivated successfully. Jul 15 23:48:22.510076 containerd[1579]: time="2025-07-15T23:48:22.509990665Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:48:22.540481 containerd[1579]: time="2025-07-15T23:48:22.539341228Z" level=info msg="Container c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:22.552068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517431974.mount: Deactivated successfully. Jul 15 23:48:22.568301 containerd[1579]: time="2025-07-15T23:48:22.568193295Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\"" Jul 15 23:48:22.570827 containerd[1579]: time="2025-07-15T23:48:22.569912921Z" level=info msg="StartContainer for \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\"" Jul 15 23:48:22.574343 containerd[1579]: time="2025-07-15T23:48:22.574231642Z" level=info msg="connecting to shim c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" protocol=ttrpc version=3 Jul 15 23:48:22.615769 systemd[1]: Started cri-containerd-c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37.scope - libcontainer container c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37. Jul 15 23:48:22.713119 systemd[1]: cri-containerd-c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37.scope: Deactivated successfully. 
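The short-lived containers above are cilium's init steps (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) each running to completion, with systemd-sysctl being restarted around the sysctl step. Their effects can be checked directly on the host; a sketch, where the specific sysctl shown is only a typical example of what cilium commonly overrides and is not recorded in this log.

    # The mount-bpf-fs init container should leave a bpf filesystem mounted here.
    findmnt /sys/fs/bpf

    # systemd-sysctl was restarted above; verify it finished cleanly.
    systemctl status systemd-sysctl.service --no-pager

    # Illustrative only: one sysctl cilium commonly adjusts (not taken from this log).
    sysctl net.ipv4.conf.all.rp_filter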
Jul 15 23:48:22.715488 containerd[1579]: time="2025-07-15T23:48:22.715362438Z" level=info msg="StartContainer for \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" returns successfully" Jul 15 23:48:22.716930 containerd[1579]: time="2025-07-15T23:48:22.716758693Z" level=info msg="received exit event container_id:\"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" id:\"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" pid:3340 exited_at:{seconds:1752623302 nanos:716298893}" Jul 15 23:48:22.718049 containerd[1579]: time="2025-07-15T23:48:22.717633837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" id:\"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" pid:3340 exited_at:{seconds:1752623302 nanos:716298893}" Jul 15 23:48:22.781396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37-rootfs.mount: Deactivated successfully. Jul 15 23:48:23.074885 containerd[1579]: time="2025-07-15T23:48:23.074646994Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:48:23.075791 containerd[1579]: time="2025-07-15T23:48:23.075734416Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 15 23:48:23.077045 containerd[1579]: time="2025-07-15T23:48:23.076979486Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:48:23.079128 containerd[1579]: time="2025-07-15T23:48:23.078599380Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.16997595s" Jul 15 23:48:23.079128 containerd[1579]: time="2025-07-15T23:48:23.078649450Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 23:48:23.084725 containerd[1579]: time="2025-07-15T23:48:23.084687228Z" level=info msg="CreateContainer within sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 23:48:23.093299 containerd[1579]: time="2025-07-15T23:48:23.093258411Z" level=info msg="Container 4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:23.101156 containerd[1579]: time="2025-07-15T23:48:23.101110424Z" level=info msg="CreateContainer within sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\"" Jul 15 23:48:23.102076 containerd[1579]: 
time="2025-07-15T23:48:23.102035955Z" level=info msg="StartContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\"" Jul 15 23:48:23.103812 containerd[1579]: time="2025-07-15T23:48:23.103688834Z" level=info msg="connecting to shim 4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45" address="unix:///run/containerd/s/4cf194bfc1b0ac6ad8955bdab561d43ccc7db62b85bd8c315c90777ea58d6583" protocol=ttrpc version=3 Jul 15 23:48:23.132653 systemd[1]: Started cri-containerd-4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45.scope - libcontainer container 4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45. Jul 15 23:48:23.181679 containerd[1579]: time="2025-07-15T23:48:23.181609360Z" level=info msg="StartContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" returns successfully" Jul 15 23:48:23.537045 containerd[1579]: time="2025-07-15T23:48:23.536996096Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:48:23.558678 containerd[1579]: time="2025-07-15T23:48:23.558628373Z" level=info msg="Container d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:23.567839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383190561.mount: Deactivated successfully. Jul 15 23:48:23.572037 containerd[1579]: time="2025-07-15T23:48:23.571925710Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\"" Jul 15 23:48:23.572990 containerd[1579]: time="2025-07-15T23:48:23.572813929Z" level=info msg="StartContainer for \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\"" Jul 15 23:48:23.576213 containerd[1579]: time="2025-07-15T23:48:23.576131056Z" level=info msg="connecting to shim d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" protocol=ttrpc version=3 Jul 15 23:48:23.616124 kubelet[2792]: I0715 23:48:23.616036 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dgxl2" podStartSLOduration=1.380742799 podStartE2EDuration="18.616011231s" podCreationTimestamp="2025-07-15 23:48:05 +0000 UTC" firstStartedPulling="2025-07-15 23:48:05.844561861 +0000 UTC m=+7.748175900" lastFinishedPulling="2025-07-15 23:48:23.079830295 +0000 UTC m=+24.983444332" observedRunningTime="2025-07-15 23:48:23.540672547 +0000 UTC m=+25.444286608" watchObservedRunningTime="2025-07-15 23:48:23.616011231 +0000 UTC m=+25.519625294" Jul 15 23:48:23.634686 systemd[1]: Started cri-containerd-d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb.scope - libcontainer container d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb. Jul 15 23:48:23.759739 containerd[1579]: time="2025-07-15T23:48:23.759658597Z" level=info msg="StartContainer for \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" returns successfully" Jul 15 23:48:23.761396 systemd[1]: cri-containerd-d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb.scope: Deactivated successfully. 
Jul 15 23:48:23.765062 containerd[1579]: time="2025-07-15T23:48:23.765000423Z" level=info msg="received exit event container_id:\"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" id:\"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" pid:3418 exited_at:{seconds:1752623303 nanos:763776993}" Jul 15 23:48:23.766779 containerd[1579]: time="2025-07-15T23:48:23.765819116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" id:\"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" pid:3418 exited_at:{seconds:1752623303 nanos:763776993}" Jul 15 23:48:23.828705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb-rootfs.mount: Deactivated successfully. Jul 15 23:48:24.544504 containerd[1579]: time="2025-07-15T23:48:24.544236798Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:48:24.560765 containerd[1579]: time="2025-07-15T23:48:24.560633735Z" level=info msg="Container 3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:24.582328 containerd[1579]: time="2025-07-15T23:48:24.582265254Z" level=info msg="CreateContainer within sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\"" Jul 15 23:48:24.584215 containerd[1579]: time="2025-07-15T23:48:24.584154771Z" level=info msg="StartContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\"" Jul 15 23:48:24.586933 containerd[1579]: time="2025-07-15T23:48:24.586897509Z" level=info msg="connecting to shim 3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72" address="unix:///run/containerd/s/a8c75286d82f8c18c74b21bff4f5e5ca680acfbdae83761e0cb6ab534c44d3f8" protocol=ttrpc version=3 Jul 15 23:48:24.620706 systemd[1]: Started cri-containerd-3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72.scope - libcontainer container 3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72. Jul 15 23:48:24.681669 containerd[1579]: time="2025-07-15T23:48:24.681616370Z" level=info msg="StartContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" returns successfully" Jul 15 23:48:24.813044 containerd[1579]: time="2025-07-15T23:48:24.812905771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" id:\"01e9612dc85c20b9460a63fa6077a8e4437cb055c8eba65a0eea5d9943df0092\" pid:3488 exited_at:{seconds:1752623304 nanos:811614671}" Jul 15 23:48:24.840481 kubelet[2792]: I0715 23:48:24.839546 2792 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:48:24.905113 systemd[1]: Created slice kubepods-burstable-podb5fb9fd4_c322_484a_a0ba_7abfca6965c9.slice - libcontainer container kubepods-burstable-podb5fb9fd4_c322_484a_a0ba_7abfca6965c9.slice. Jul 15 23:48:24.925175 systemd[1]: Created slice kubepods-burstable-pode1af4baa_4d38_4d73_8c7d_19800e4c5dc0.slice - libcontainer container kubepods-burstable-pode1af4baa_4d38_4d73_8c7d_19800e4c5dc0.slice. 
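With the cilium-agent container started and the kubelet reporting the node "just became ready", the two coredns pods above are finally admitted onto burstable slices. A sketch for confirming the resulting state from a machine with cluster credentials; it assumes the cilium CLI is available inside the agent image, and uses only names printed in the log.

    kubectl get nodes -o wide                                    # node should now report Ready
    kubectl -n kube-system get pods -o wide                      # cilium-gvbmj, cilium-operator-6c4d7847fc-dgxl2, kube-proxy-zsk67, coredns-674b8bbfcf-*
    kubectl -n kube-system exec cilium-gvbmj -- cilium status    # agent health and datapath state (assumes the CLI ships in the agent image)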
Jul 15 23:48:24.974828 kubelet[2792]: I0715 23:48:24.974756 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5fb9fd4-c322-484a-a0ba-7abfca6965c9-config-volume\") pod \"coredns-674b8bbfcf-8hrnj\" (UID: \"b5fb9fd4-c322-484a-a0ba-7abfca6965c9\") " pod="kube-system/coredns-674b8bbfcf-8hrnj" Jul 15 23:48:24.974828 kubelet[2792]: I0715 23:48:24.974823 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smq9p\" (UniqueName: \"kubernetes.io/projected/b5fb9fd4-c322-484a-a0ba-7abfca6965c9-kube-api-access-smq9p\") pod \"coredns-674b8bbfcf-8hrnj\" (UID: \"b5fb9fd4-c322-484a-a0ba-7abfca6965c9\") " pod="kube-system/coredns-674b8bbfcf-8hrnj" Jul 15 23:48:24.975104 kubelet[2792]: I0715 23:48:24.974850 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1af4baa-4d38-4d73-8c7d-19800e4c5dc0-config-volume\") pod \"coredns-674b8bbfcf-dtnrb\" (UID: \"e1af4baa-4d38-4d73-8c7d-19800e4c5dc0\") " pod="kube-system/coredns-674b8bbfcf-dtnrb" Jul 15 23:48:24.975104 kubelet[2792]: I0715 23:48:24.974877 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88pvl\" (UniqueName: \"kubernetes.io/projected/e1af4baa-4d38-4d73-8c7d-19800e4c5dc0-kube-api-access-88pvl\") pod \"coredns-674b8bbfcf-dtnrb\" (UID: \"e1af4baa-4d38-4d73-8c7d-19800e4c5dc0\") " pod="kube-system/coredns-674b8bbfcf-dtnrb" Jul 15 23:48:25.220140 containerd[1579]: time="2025-07-15T23:48:25.220069809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8hrnj,Uid:b5fb9fd4-c322-484a-a0ba-7abfca6965c9,Namespace:kube-system,Attempt:0,}" Jul 15 23:48:25.233838 containerd[1579]: time="2025-07-15T23:48:25.233677507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dtnrb,Uid:e1af4baa-4d38-4d73-8c7d-19800e4c5dc0,Namespace:kube-system,Attempt:0,}" Jul 15 23:48:25.598851 kubelet[2792]: I0715 23:48:25.598297 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gvbmj" podStartSLOduration=9.386240895 podStartE2EDuration="21.598270877s" podCreationTimestamp="2025-07-15 23:48:04 +0000 UTC" firstStartedPulling="2025-07-15 23:48:05.694780891 +0000 UTC m=+7.598394949" lastFinishedPulling="2025-07-15 23:48:17.906810888 +0000 UTC m=+19.810424931" observedRunningTime="2025-07-15 23:48:25.595130601 +0000 UTC m=+27.498744664" watchObservedRunningTime="2025-07-15 23:48:25.598270877 +0000 UTC m=+27.501884940" Jul 15 23:48:27.303130 systemd-networkd[1460]: cilium_host: Link UP Jul 15 23:48:27.306604 systemd-networkd[1460]: cilium_net: Link UP Jul 15 23:48:27.306921 systemd-networkd[1460]: cilium_host: Gained carrier Jul 15 23:48:27.307197 systemd-networkd[1460]: cilium_net: Gained carrier Jul 15 23:48:27.451740 systemd-networkd[1460]: cilium_vxlan: Link UP Jul 15 23:48:27.452116 systemd-networkd[1460]: cilium_vxlan: Gained carrier Jul 15 23:48:27.581871 systemd-networkd[1460]: cilium_host: Gained IPv6LL Jul 15 23:48:27.731589 kernel: NET: Registered PF_ALG protocol family Jul 15 23:48:27.750633 systemd-networkd[1460]: cilium_net: Gained IPv6LL Jul 15 23:48:28.628990 systemd-networkd[1460]: lxc_health: Link UP Jul 15 23:48:28.634691 systemd-networkd[1460]: lxc_health: Gained carrier Jul 15 23:48:28.798087 systemd-networkd[1460]: cilium_vxlan: Gained 
IPv6LL Jul 15 23:48:29.282966 systemd-networkd[1460]: lxc0e08ef776679: Link UP Jul 15 23:48:29.302216 kernel: eth0: renamed from tmp1e854 Jul 15 23:48:29.308762 systemd-networkd[1460]: lxc0e08ef776679: Gained carrier Jul 15 23:48:29.320510 systemd-networkd[1460]: lxc330c1d5ef997: Link UP Jul 15 23:48:29.343491 kernel: eth0: renamed from tmp47501 Jul 15 23:48:29.348252 systemd-networkd[1460]: lxc330c1d5ef997: Gained carrier Jul 15 23:48:29.823741 systemd-networkd[1460]: lxc_health: Gained IPv6LL Jul 15 23:48:30.717958 systemd-networkd[1460]: lxc0e08ef776679: Gained IPv6LL Jul 15 23:48:30.845851 systemd-networkd[1460]: lxc330c1d5ef997: Gained IPv6LL Jul 15 23:48:31.327947 kubelet[2792]: I0715 23:48:31.326840 2792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:48:33.414839 ntpd[1535]: Listen normally on 7 cilium_host 192.168.0.230:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 7 cilium_host 192.168.0.230:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 8 cilium_net [fe80::b8d1:c0ff:fe3a:e513%4]:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 9 cilium_host [fe80::d014:43ff:fe5d:86c6%5]:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 10 cilium_vxlan [fe80::3440:eeff:feba:a84%6]:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 11 lxc_health [fe80::b84a:24ff:fe94:9f66%8]:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 12 lxc0e08ef776679 [fe80::3857:c0ff:fe43:a426%10]:123 Jul 15 23:48:33.416171 ntpd[1535]: 15 Jul 23:48:33 ntpd[1535]: Listen normally on 13 lxc330c1d5ef997 [fe80::500c:d5ff:fe86:62b0%12]:123 Jul 15 23:48:33.414969 ntpd[1535]: Listen normally on 8 cilium_net [fe80::b8d1:c0ff:fe3a:e513%4]:123 Jul 15 23:48:33.415048 ntpd[1535]: Listen normally on 9 cilium_host [fe80::d014:43ff:fe5d:86c6%5]:123 Jul 15 23:48:33.415110 ntpd[1535]: Listen normally on 10 cilium_vxlan [fe80::3440:eeff:feba:a84%6]:123 Jul 15 23:48:33.415166 ntpd[1535]: Listen normally on 11 lxc_health [fe80::b84a:24ff:fe94:9f66%8]:123 Jul 15 23:48:33.415397 ntpd[1535]: Listen normally on 12 lxc0e08ef776679 [fe80::3857:c0ff:fe43:a426%10]:123 Jul 15 23:48:33.415499 ntpd[1535]: Listen normally on 13 lxc330c1d5ef997 [fe80::500c:d5ff:fe86:62b0%12]:123 Jul 15 23:48:34.149232 containerd[1579]: time="2025-07-15T23:48:34.149133979Z" level=info msg="connecting to shim 47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03" address="unix:///run/containerd/s/3657e38b7f46983f78157d05754d010875e37abd678d1c6ff2d3aa8acef541f5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:48:34.213680 systemd[1]: Started cri-containerd-47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03.scope - libcontainer container 47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03. Jul 15 23:48:34.280194 containerd[1579]: time="2025-07-15T23:48:34.280113475Z" level=info msg="connecting to shim 1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67" address="unix:///run/containerd/s/4948d29f722160a95b614ce1fac6c2e6eed753b586557658356b61a63a535729" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:48:34.357439 systemd[1]: Started cri-containerd-1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67.scope - libcontainer container 1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67. 
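The link-up messages above (cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-endpoint lxc* devices) are the cilium datapath interfaces that systemd-networkd notices and that ntpd then binds to on port 123. A sketch for listing the same state on the host:

    ip -br link show                  # cilium_host/cilium_net/cilium_vxlan plus one lxc* device per endpoint
    ip -br addr show cilium_host      # the node's cilium router address (192.168.0.230 in the ntpd lines above)
    networkctl list                   # systemd-networkd's view of the same links
    ss -ulpn 'sport = :123'           # ntpd sockets bound on the new interfaces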
Jul 15 23:48:34.393377 containerd[1579]: time="2025-07-15T23:48:34.393305799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dtnrb,Uid:e1af4baa-4d38-4d73-8c7d-19800e4c5dc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03\"" Jul 15 23:48:34.406520 containerd[1579]: time="2025-07-15T23:48:34.404634256Z" level=info msg="CreateContainer within sandbox \"47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:48:34.434920 containerd[1579]: time="2025-07-15T23:48:34.434869198Z" level=info msg="Container 8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:34.446581 containerd[1579]: time="2025-07-15T23:48:34.445241207Z" level=info msg="CreateContainer within sandbox \"47501da54f02b0d4eca7ffe3a3d61ceb5a183c176c0e395e8b2a2c6dabe33e03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb\"" Jul 15 23:48:34.447759 containerd[1579]: time="2025-07-15T23:48:34.447709951Z" level=info msg="StartContainer for \"8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb\"" Jul 15 23:48:34.449365 containerd[1579]: time="2025-07-15T23:48:34.449312307Z" level=info msg="connecting to shim 8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb" address="unix:///run/containerd/s/3657e38b7f46983f78157d05754d010875e37abd678d1c6ff2d3aa8acef541f5" protocol=ttrpc version=3 Jul 15 23:48:34.482722 systemd[1]: Started cri-containerd-8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb.scope - libcontainer container 8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb. 
Jul 15 23:48:34.516733 containerd[1579]: time="2025-07-15T23:48:34.516563999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8hrnj,Uid:b5fb9fd4-c322-484a-a0ba-7abfca6965c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67\"" Jul 15 23:48:34.526946 containerd[1579]: time="2025-07-15T23:48:34.526523862Z" level=info msg="CreateContainer within sandbox \"1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:48:34.542035 containerd[1579]: time="2025-07-15T23:48:34.541976397Z" level=info msg="Container aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:48:34.547365 containerd[1579]: time="2025-07-15T23:48:34.547281460Z" level=info msg="StartContainer for \"8484c1a75eb95d4b62f75ce25c305d924f5dfb25f8600fc59429f41c5978b6eb\" returns successfully" Jul 15 23:48:34.554521 containerd[1579]: time="2025-07-15T23:48:34.554478249Z" level=info msg="CreateContainer within sandbox \"1e854660f56941ebc627f1310aa5e6b88bc3eb8a5074df4fbdd15ae0e7eb5b67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d\"" Jul 15 23:48:34.555358 containerd[1579]: time="2025-07-15T23:48:34.555321460Z" level=info msg="StartContainer for \"aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d\"" Jul 15 23:48:34.557550 containerd[1579]: time="2025-07-15T23:48:34.557486552Z" level=info msg="connecting to shim aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d" address="unix:///run/containerd/s/4948d29f722160a95b614ce1fac6c2e6eed753b586557658356b61a63a535729" protocol=ttrpc version=3 Jul 15 23:48:34.590969 systemd[1]: Started cri-containerd-aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d.scope - libcontainer container aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d. Jul 15 23:48:34.626861 kubelet[2792]: I0715 23:48:34.626767 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dtnrb" podStartSLOduration=29.626743554 podStartE2EDuration="29.626743554s" podCreationTimestamp="2025-07-15 23:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:48:34.626666087 +0000 UTC m=+36.530280150" watchObservedRunningTime="2025-07-15 23:48:34.626743554 +0000 UTC m=+36.530357618" Jul 15 23:48:34.716207 containerd[1579]: time="2025-07-15T23:48:34.716160149Z" level=info msg="StartContainer for \"aaa1fea0e4cd6a8f5cd5d9e21aa6d34766cd797da170eb2d35667c1702a6067d\" returns successfully" Jul 15 23:48:35.615033 kubelet[2792]: I0715 23:48:35.614947 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8hrnj" podStartSLOduration=30.614922833 podStartE2EDuration="30.614922833s" podCreationTimestamp="2025-07-15 23:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:48:35.613140349 +0000 UTC m=+37.516754435" watchObservedRunningTime="2025-07-15 23:48:35.614922833 +0000 UTC m=+37.518536899" Jul 15 23:48:52.105952 systemd[1]: Started sshd@9-10.128.0.95:22-139.178.89.65:41240.service - OpenSSH per-connection server daemon (139.178.89.65:41240). 
Jul 15 23:48:52.416241 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 41240 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:48:52.418167 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:48:52.426017 systemd-logind[1545]: New session 10 of user core. Jul 15 23:48:52.430678 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:48:52.731919 sshd[4131]: Connection closed by 139.178.89.65 port 41240 Jul 15 23:48:52.733238 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Jul 15 23:48:52.739003 systemd[1]: sshd@9-10.128.0.95:22-139.178.89.65:41240.service: Deactivated successfully. Jul 15 23:48:52.742066 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:48:52.743679 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Jul 15 23:48:52.745808 systemd-logind[1545]: Removed session 10. Jul 15 23:48:57.790583 systemd[1]: Started sshd@10-10.128.0.95:22-139.178.89.65:41242.service - OpenSSH per-connection server daemon (139.178.89.65:41242). Jul 15 23:48:58.095808 sshd[4144]: Accepted publickey for core from 139.178.89.65 port 41242 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:48:58.097943 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:48:58.105505 systemd-logind[1545]: New session 11 of user core. Jul 15 23:48:58.115711 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:48:58.390619 sshd[4146]: Connection closed by 139.178.89.65 port 41242 Jul 15 23:48:58.391787 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Jul 15 23:48:58.399832 systemd[1]: sshd@10-10.128.0.95:22-139.178.89.65:41242.service: Deactivated successfully. Jul 15 23:48:58.402602 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:48:58.405550 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:48:58.408592 systemd-logind[1545]: Removed session 11. Jul 15 23:49:03.446622 systemd[1]: Started sshd@11-10.128.0.95:22-139.178.89.65:37548.service - OpenSSH per-connection server daemon (139.178.89.65:37548). Jul 15 23:49:03.751355 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 37548 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:03.753177 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:03.760517 systemd-logind[1545]: New session 12 of user core. Jul 15 23:49:03.765674 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 23:49:04.042793 sshd[4163]: Connection closed by 139.178.89.65 port 37548 Jul 15 23:49:04.043789 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:04.049682 systemd[1]: sshd@11-10.128.0.95:22-139.178.89.65:37548.service: Deactivated successfully. Jul 15 23:49:04.052574 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:49:04.054087 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:49:04.056102 systemd-logind[1545]: Removed session 12. Jul 15 23:49:09.097863 systemd[1]: Started sshd@12-10.128.0.95:22-139.178.89.65:42904.service - OpenSSH per-connection server daemon (139.178.89.65:42904). 
Jul 15 23:49:09.397619 sshd[4178]: Accepted publickey for core from 139.178.89.65 port 42904 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:09.399411 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:09.406547 systemd-logind[1545]: New session 13 of user core. Jul 15 23:49:09.413646 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:49:09.691054 sshd[4180]: Connection closed by 139.178.89.65 port 42904 Jul 15 23:49:09.691898 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:09.697618 systemd[1]: sshd@12-10.128.0.95:22-139.178.89.65:42904.service: Deactivated successfully. Jul 15 23:49:09.700518 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:49:09.701888 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:49:09.704242 systemd-logind[1545]: Removed session 13. Jul 15 23:49:14.744345 systemd[1]: Started sshd@13-10.128.0.95:22-139.178.89.65:42912.service - OpenSSH per-connection server daemon (139.178.89.65:42912). Jul 15 23:49:15.042314 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 42912 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:15.044290 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:15.051723 systemd-logind[1545]: New session 14 of user core. Jul 15 23:49:15.058643 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:49:15.332470 sshd[4195]: Connection closed by 139.178.89.65 port 42912 Jul 15 23:49:15.333779 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:15.339623 systemd[1]: sshd@13-10.128.0.95:22-139.178.89.65:42912.service: Deactivated successfully. Jul 15 23:49:15.342740 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:49:15.344222 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:49:15.347048 systemd-logind[1545]: Removed session 14. Jul 15 23:49:15.389545 systemd[1]: Started sshd@14-10.128.0.95:22-139.178.89.65:42926.service - OpenSSH per-connection server daemon (139.178.89.65:42926). Jul 15 23:49:15.701307 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 42926 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:15.703098 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:15.709590 systemd-logind[1545]: New session 15 of user core. Jul 15 23:49:15.722688 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:49:16.049353 sshd[4210]: Connection closed by 139.178.89.65 port 42926 Jul 15 23:49:16.050218 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:16.056550 systemd[1]: sshd@14-10.128.0.95:22-139.178.89.65:42926.service: Deactivated successfully. Jul 15 23:49:16.059633 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:49:16.060872 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:49:16.063377 systemd-logind[1545]: Removed session 15. Jul 15 23:49:16.102673 systemd[1]: Started sshd@15-10.128.0.95:22-139.178.89.65:42936.service - OpenSSH per-connection server daemon (139.178.89.65:42936). 
Jul 15 23:49:16.401001 sshd[4220]: Accepted publickey for core from 139.178.89.65 port 42936 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:16.402934 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:16.410820 systemd-logind[1545]: New session 16 of user core. Jul 15 23:49:16.415665 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 23:49:16.693707 sshd[4222]: Connection closed by 139.178.89.65 port 42936 Jul 15 23:49:16.694773 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:16.700485 systemd[1]: sshd@15-10.128.0.95:22-139.178.89.65:42936.service: Deactivated successfully. Jul 15 23:49:16.703445 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 23:49:16.707565 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Jul 15 23:49:16.709363 systemd-logind[1545]: Removed session 16. Jul 15 23:49:21.756749 systemd[1]: Started sshd@16-10.128.0.95:22-139.178.89.65:49532.service - OpenSSH per-connection server daemon (139.178.89.65:49532). Jul 15 23:49:22.055777 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 49532 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:22.057601 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:22.064575 systemd-logind[1545]: New session 17 of user core. Jul 15 23:49:22.069636 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 23:49:22.345570 sshd[4238]: Connection closed by 139.178.89.65 port 49532 Jul 15 23:49:22.346763 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:22.352771 systemd[1]: sshd@16-10.128.0.95:22-139.178.89.65:49532.service: Deactivated successfully. Jul 15 23:49:22.357443 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 23:49:22.358988 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Jul 15 23:49:22.361063 systemd-logind[1545]: Removed session 17. Jul 15 23:49:23.818868 systemd[1]: Started sshd@17-10.128.0.95:22-195.178.110.125:33814.service - OpenSSH per-connection server daemon (195.178.110.125:33814). Jul 15 23:49:24.423153 sshd[4250]: Connection closed by authenticating user root 195.178.110.125 port 33814 [preauth] Jul 15 23:49:24.426290 systemd[1]: sshd@17-10.128.0.95:22-195.178.110.125:33814.service: Deactivated successfully. Jul 15 23:49:24.554608 systemd[1]: Started sshd@18-10.128.0.95:22-195.178.110.125:33824.service - OpenSSH per-connection server daemon (195.178.110.125:33824). Jul 15 23:49:25.178724 sshd[4255]: Connection closed by authenticating user root 195.178.110.125 port 33824 [preauth] Jul 15 23:49:25.181962 systemd[1]: sshd@18-10.128.0.95:22-195.178.110.125:33824.service: Deactivated successfully. Jul 15 23:49:25.316203 systemd[1]: Started sshd@19-10.128.0.95:22-195.178.110.125:33830.service - OpenSSH per-connection server daemon (195.178.110.125:33830). Jul 15 23:49:25.917174 sshd[4260]: Connection closed by authenticating user root 195.178.110.125 port 33830 [preauth] Jul 15 23:49:25.920385 systemd[1]: sshd@19-10.128.0.95:22-195.178.110.125:33830.service: Deactivated successfully. Jul 15 23:49:26.047327 systemd[1]: Started sshd@20-10.128.0.95:22-195.178.110.125:33840.service - OpenSSH per-connection server daemon (195.178.110.125:33840). 
Jul 15 23:49:26.658620 sshd[4265]: Connection closed by authenticating user root 195.178.110.125 port 33840 [preauth] Jul 15 23:49:26.661853 systemd[1]: sshd@20-10.128.0.95:22-195.178.110.125:33840.service: Deactivated successfully. Jul 15 23:49:26.781279 systemd[1]: Started sshd@21-10.128.0.95:22-195.178.110.125:33848.service - OpenSSH per-connection server daemon (195.178.110.125:33848). Jul 15 23:49:27.385778 sshd[4270]: Connection closed by authenticating user root 195.178.110.125 port 33848 [preauth] Jul 15 23:49:27.398330 systemd[1]: sshd@21-10.128.0.95:22-195.178.110.125:33848.service: Deactivated successfully. Jul 15 23:49:27.404827 systemd[1]: Started sshd@22-10.128.0.95:22-139.178.89.65:49548.service - OpenSSH per-connection server daemon (139.178.89.65:49548). Jul 15 23:49:27.703861 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 49548 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:27.705761 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:27.713082 systemd-logind[1545]: New session 18 of user core. Jul 15 23:49:27.717658 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 23:49:27.995275 sshd[4277]: Connection closed by 139.178.89.65 port 49548 Jul 15 23:49:27.996669 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:28.001844 systemd[1]: sshd@22-10.128.0.95:22-139.178.89.65:49548.service: Deactivated successfully. Jul 15 23:49:28.005132 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 23:49:28.007164 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Jul 15 23:49:28.009917 systemd-logind[1545]: Removed session 18. Jul 15 23:49:28.053579 systemd[1]: Started sshd@23-10.128.0.95:22-139.178.89.65:49550.service - OpenSSH per-connection server daemon (139.178.89.65:49550). Jul 15 23:49:28.359380 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 49550 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:28.361260 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:28.368514 systemd-logind[1545]: New session 19 of user core. Jul 15 23:49:28.374153 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 23:49:28.717407 sshd[4291]: Connection closed by 139.178.89.65 port 49550 Jul 15 23:49:28.718260 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:28.725198 systemd[1]: sshd@23-10.128.0.95:22-139.178.89.65:49550.service: Deactivated successfully. Jul 15 23:49:28.728557 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 23:49:28.730264 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Jul 15 23:49:28.732748 systemd-logind[1545]: Removed session 19. Jul 15 23:49:28.782662 systemd[1]: Started sshd@24-10.128.0.95:22-139.178.89.65:49564.service - OpenSSH per-connection server daemon (139.178.89.65:49564). Jul 15 23:49:29.088585 sshd[4301]: Accepted publickey for core from 139.178.89.65 port 49564 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:29.090620 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:29.097586 systemd-logind[1545]: New session 20 of user core. Jul 15 23:49:29.100632 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 15 23:49:29.986405 sshd[4303]: Connection closed by 139.178.89.65 port 49564 Jul 15 23:49:29.987343 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:29.995074 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Jul 15 23:49:29.996268 systemd[1]: sshd@24-10.128.0.95:22-139.178.89.65:49564.service: Deactivated successfully. Jul 15 23:49:30.000316 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 23:49:30.004587 systemd-logind[1545]: Removed session 20. Jul 15 23:49:30.042600 systemd[1]: Started sshd@25-10.128.0.95:22-139.178.89.65:48456.service - OpenSSH per-connection server daemon (139.178.89.65:48456). Jul 15 23:49:30.345144 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 48456 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:30.347862 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:30.361933 systemd-logind[1545]: New session 21 of user core. Jul 15 23:49:30.367677 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 23:49:30.766202 sshd[4322]: Connection closed by 139.178.89.65 port 48456 Jul 15 23:49:30.767062 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:30.772897 systemd[1]: sshd@25-10.128.0.95:22-139.178.89.65:48456.service: Deactivated successfully. Jul 15 23:49:30.775878 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 23:49:30.777157 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Jul 15 23:49:30.779772 systemd-logind[1545]: Removed session 21. Jul 15 23:49:30.821934 systemd[1]: Started sshd@26-10.128.0.95:22-139.178.89.65:48458.service - OpenSSH per-connection server daemon (139.178.89.65:48458). Jul 15 23:49:31.124266 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 48458 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:31.126168 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:31.132232 systemd-logind[1545]: New session 22 of user core. Jul 15 23:49:31.140650 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 23:49:31.409390 sshd[4334]: Connection closed by 139.178.89.65 port 48458 Jul 15 23:49:31.410653 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:31.415648 systemd[1]: sshd@26-10.128.0.95:22-139.178.89.65:48458.service: Deactivated successfully. Jul 15 23:49:31.418721 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 23:49:31.422185 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Jul 15 23:49:31.423948 systemd-logind[1545]: Removed session 22. Jul 15 23:49:36.463860 systemd[1]: Started sshd@27-10.128.0.95:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Jul 15 23:49:36.771301 sshd[4349]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:36.773432 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:36.780553 systemd-logind[1545]: New session 23 of user core. Jul 15 23:49:36.786707 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 23:49:37.070172 sshd[4351]: Connection closed by 139.178.89.65 port 48460 Jul 15 23:49:37.071662 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:37.078610 systemd[1]: sshd@27-10.128.0.95:22-139.178.89.65:48460.service: Deactivated successfully. Jul 15 23:49:37.081790 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:49:37.083274 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:49:37.086139 systemd-logind[1545]: Removed session 23. Jul 15 23:49:42.126230 systemd[1]: Started sshd@28-10.128.0.95:22-139.178.89.65:54928.service - OpenSSH per-connection server daemon (139.178.89.65:54928). Jul 15 23:49:42.429592 sshd[4364]: Accepted publickey for core from 139.178.89.65 port 54928 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:42.431549 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:42.437745 systemd-logind[1545]: New session 24 of user core. Jul 15 23:49:42.443659 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 23:49:42.723915 sshd[4366]: Connection closed by 139.178.89.65 port 54928 Jul 15 23:49:42.725181 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:42.730813 systemd[1]: sshd@28-10.128.0.95:22-139.178.89.65:54928.service: Deactivated successfully. Jul 15 23:49:42.735017 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 23:49:42.736877 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Jul 15 23:49:42.739058 systemd-logind[1545]: Removed session 24. Jul 15 23:49:47.782564 systemd[1]: Started sshd@29-10.128.0.95:22-139.178.89.65:54930.service - OpenSSH per-connection server daemon (139.178.89.65:54930). Jul 15 23:49:48.087268 sshd[4378]: Accepted publickey for core from 139.178.89.65 port 54930 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:48.088999 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:48.098425 systemd-logind[1545]: New session 25 of user core. Jul 15 23:49:48.102716 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 23:49:48.380235 sshd[4381]: Connection closed by 139.178.89.65 port 54930 Jul 15 23:49:48.381303 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:48.387039 systemd[1]: sshd@29-10.128.0.95:22-139.178.89.65:54930.service: Deactivated successfully. Jul 15 23:49:48.390421 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 23:49:48.391991 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. Jul 15 23:49:48.395088 systemd-logind[1545]: Removed session 25. Jul 15 23:49:48.436108 systemd[1]: Started sshd@30-10.128.0.95:22-139.178.89.65:54938.service - OpenSSH per-connection server daemon (139.178.89.65:54938). Jul 15 23:49:48.745148 sshd[4394]: Accepted publickey for core from 139.178.89.65 port 54938 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:48.748189 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:48.762999 systemd-logind[1545]: New session 26 of user core. Jul 15 23:49:48.773076 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 15 23:49:50.381695 containerd[1579]: time="2025-07-15T23:49:50.381629628Z" level=info msg="StopContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" with timeout 30 (s)" Jul 15 23:49:50.384967 containerd[1579]: time="2025-07-15T23:49:50.384921367Z" level=info msg="Stop container \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" with signal terminated" Jul 15 23:49:50.471225 containerd[1579]: time="2025-07-15T23:49:50.471169094Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:49:50.495697 containerd[1579]: time="2025-07-15T23:49:50.494780795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" id:\"821ab551274673628c9b14b2fc0d41eea1dad8318d25d36bf01e2233a4f7242f\" pid:4421 exited_at:{seconds:1752623390 nanos:492603009}" Jul 15 23:49:50.495094 systemd[1]: cri-containerd-4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45.scope: Deactivated successfully. Jul 15 23:49:50.503143 containerd[1579]: time="2025-07-15T23:49:50.502615869Z" level=info msg="received exit event container_id:\"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" id:\"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" pid:3383 exited_at:{seconds:1752623390 nanos:501651007}" Jul 15 23:49:50.504988 containerd[1579]: time="2025-07-15T23:49:50.504938596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" id:\"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" pid:3383 exited_at:{seconds:1752623390 nanos:501651007}" Jul 15 23:49:50.507242 containerd[1579]: time="2025-07-15T23:49:50.507203248Z" level=info msg="StopContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" with timeout 2 (s)" Jul 15 23:49:50.508469 containerd[1579]: time="2025-07-15T23:49:50.508399255Z" level=info msg="Stop container \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" with signal terminated" Jul 15 23:49:50.524366 systemd-networkd[1460]: lxc_health: Link DOWN Jul 15 23:49:50.525226 systemd-networkd[1460]: lxc_health: Lost carrier Jul 15 23:49:50.548073 systemd[1]: cri-containerd-3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72.scope: Deactivated successfully. Jul 15 23:49:50.549190 systemd[1]: cri-containerd-3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72.scope: Consumed 9.073s CPU time, 123.1M memory peak, 136K read from disk, 13.3M written to disk. 
Jul 15 23:49:50.556194 containerd[1579]: time="2025-07-15T23:49:50.556143362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" pid:3456 exited_at:{seconds:1752623390 nanos:554115566}" Jul 15 23:49:50.556782 containerd[1579]: time="2025-07-15T23:49:50.556276580Z" level=info msg="received exit event container_id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" id:\"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" pid:3456 exited_at:{seconds:1752623390 nanos:554115566}" Jul 15 23:49:50.563965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45-rootfs.mount: Deactivated successfully. Jul 15 23:49:50.599665 containerd[1579]: time="2025-07-15T23:49:50.599602399Z" level=info msg="StopContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" returns successfully" Jul 15 23:49:50.600878 containerd[1579]: time="2025-07-15T23:49:50.600676613Z" level=info msg="StopPodSandbox for \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\"" Jul 15 23:49:50.600878 containerd[1579]: time="2025-07-15T23:49:50.600768260Z" level=info msg="Container to stop \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.610785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72-rootfs.mount: Deactivated successfully. Jul 15 23:49:50.620083 containerd[1579]: time="2025-07-15T23:49:50.620039675Z" level=info msg="StopContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" returns successfully" Jul 15 23:49:50.620918 systemd[1]: cri-containerd-f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e.scope: Deactivated successfully. 
Jul 15 23:49:50.622157 containerd[1579]: time="2025-07-15T23:49:50.622124730Z" level=info msg="StopPodSandbox for \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\"" Jul 15 23:49:50.623283 containerd[1579]: time="2025-07-15T23:49:50.623220992Z" level=info msg="Container to stop \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.623283 containerd[1579]: time="2025-07-15T23:49:50.623255201Z" level=info msg="Container to stop \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.624274 containerd[1579]: time="2025-07-15T23:49:50.624220557Z" level=info msg="Container to stop \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.624274 containerd[1579]: time="2025-07-15T23:49:50.624256983Z" level=info msg="Container to stop \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.624274 containerd[1579]: time="2025-07-15T23:49:50.624274297Z" level=info msg="Container to stop \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:49:50.628883 containerd[1579]: time="2025-07-15T23:49:50.628810444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" id:\"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" pid:3054 exit_status:137 exited_at:{seconds:1752623390 nanos:627873418}" Jul 15 23:49:50.636575 systemd[1]: cri-containerd-6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15.scope: Deactivated successfully. Jul 15 23:49:50.692857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15-rootfs.mount: Deactivated successfully. Jul 15 23:49:50.701672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e-rootfs.mount: Deactivated successfully. 
Jul 15 23:49:50.704665 containerd[1579]: time="2025-07-15T23:49:50.704207940Z" level=info msg="shim disconnected" id=f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e namespace=k8s.io Jul 15 23:49:50.704665 containerd[1579]: time="2025-07-15T23:49:50.704246219Z" level=warning msg="cleaning up after shim disconnected" id=f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e namespace=k8s.io Jul 15 23:49:50.704665 containerd[1579]: time="2025-07-15T23:49:50.704259485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:49:50.705818 containerd[1579]: time="2025-07-15T23:49:50.705690546Z" level=info msg="shim disconnected" id=6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15 namespace=k8s.io Jul 15 23:49:50.705818 containerd[1579]: time="2025-07-15T23:49:50.705728195Z" level=warning msg="cleaning up after shim disconnected" id=6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15 namespace=k8s.io Jul 15 23:49:50.705818 containerd[1579]: time="2025-07-15T23:49:50.705742795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:49:50.729529 containerd[1579]: time="2025-07-15T23:49:50.729388704Z" level=info msg="received exit event sandbox_id:\"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" exit_status:137 exited_at:{seconds:1752623390 nanos:627873418}" Jul 15 23:49:50.730275 containerd[1579]: time="2025-07-15T23:49:50.730231225Z" level=info msg="received exit event sandbox_id:\"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" exit_status:137 exited_at:{seconds:1752623390 nanos:643115822}" Jul 15 23:49:50.730585 containerd[1579]: time="2025-07-15T23:49:50.730552587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" id:\"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" pid:3001 exit_status:137 exited_at:{seconds:1752623390 nanos:643115822}" Jul 15 23:49:50.731885 containerd[1579]: time="2025-07-15T23:49:50.731848022Z" level=info msg="TearDown network for sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" successfully" Jul 15 23:49:50.732029 containerd[1579]: time="2025-07-15T23:49:50.732006428Z" level=info msg="StopPodSandbox for \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" returns successfully" Jul 15 23:49:50.732878 containerd[1579]: time="2025-07-15T23:49:50.732847056Z" level=info msg="TearDown network for sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" successfully" Jul 15 23:49:50.735472 containerd[1579]: time="2025-07-15T23:49:50.733792275Z" level=info msg="StopPodSandbox for \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" returns successfully" Jul 15 23:49:50.736127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e-shm.mount: Deactivated successfully. 
Jul 15 23:49:50.780366 kubelet[2792]: I0715 23:49:50.780302 2792 scope.go:117] "RemoveContainer" containerID="4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45" Jul 15 23:49:50.785774 containerd[1579]: time="2025-07-15T23:49:50.785727298Z" level=info msg="RemoveContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\"" Jul 15 23:49:50.796932 containerd[1579]: time="2025-07-15T23:49:50.796844437Z" level=info msg="RemoveContainer for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" returns successfully" Jul 15 23:49:50.798012 kubelet[2792]: I0715 23:49:50.797254 2792 scope.go:117] "RemoveContainer" containerID="4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45" Jul 15 23:49:50.798256 kubelet[2792]: E0715 23:49:50.798208 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\": not found" containerID="4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45" Jul 15 23:49:50.798370 containerd[1579]: time="2025-07-15T23:49:50.798032850Z" level=error msg="ContainerStatus for \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\": not found" Jul 15 23:49:50.798578 kubelet[2792]: I0715 23:49:50.798251 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45"} err="failed to get container status \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c8426a905b256dc71a82fdd7d9b48e13c8de55e4b7a7a33b64f47c281cdaf45\": not found" Jul 15 23:49:50.798578 kubelet[2792]: I0715 23:49:50.798306 2792 scope.go:117] "RemoveContainer" containerID="3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72" Jul 15 23:49:50.802018 containerd[1579]: time="2025-07-15T23:49:50.801968814Z" level=info msg="RemoveContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\"" Jul 15 23:49:50.808402 containerd[1579]: time="2025-07-15T23:49:50.808280728Z" level=info msg="RemoveContainer for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" returns successfully" Jul 15 23:49:50.808631 kubelet[2792]: I0715 23:49:50.808601 2792 scope.go:117] "RemoveContainer" containerID="d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb" Jul 15 23:49:50.810596 containerd[1579]: time="2025-07-15T23:49:50.810560352Z" level=info msg="RemoveContainer for \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\"" Jul 15 23:49:50.816477 containerd[1579]: time="2025-07-15T23:49:50.816421292Z" level=info msg="RemoveContainer for \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" returns successfully" Jul 15 23:49:50.816828 kubelet[2792]: I0715 23:49:50.816785 2792 scope.go:117] "RemoveContainer" containerID="c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37" Jul 15 23:49:50.820001 containerd[1579]: time="2025-07-15T23:49:50.819960409Z" level=info msg="RemoveContainer for \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\"" Jul 15 23:49:50.825755 containerd[1579]: time="2025-07-15T23:49:50.825698913Z" level=info 
msg="RemoveContainer for \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" returns successfully" Jul 15 23:49:50.826005 kubelet[2792]: I0715 23:49:50.825975 2792 scope.go:117] "RemoveContainer" containerID="212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56" Jul 15 23:49:50.828216 containerd[1579]: time="2025-07-15T23:49:50.828099523Z" level=info msg="RemoveContainer for \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\"" Jul 15 23:49:50.833077 containerd[1579]: time="2025-07-15T23:49:50.833036064Z" level=info msg="RemoveContainer for \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" returns successfully" Jul 15 23:49:50.833335 kubelet[2792]: I0715 23:49:50.833282 2792 scope.go:117] "RemoveContainer" containerID="4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2" Jul 15 23:49:50.835357 containerd[1579]: time="2025-07-15T23:49:50.835281296Z" level=info msg="RemoveContainer for \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\"" Jul 15 23:49:50.839956 containerd[1579]: time="2025-07-15T23:49:50.839920012Z" level=info msg="RemoveContainer for \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" returns successfully" Jul 15 23:49:50.840205 kubelet[2792]: I0715 23:49:50.840180 2792 scope.go:117] "RemoveContainer" containerID="3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72" Jul 15 23:49:50.840540 containerd[1579]: time="2025-07-15T23:49:50.840492852Z" level=error msg="ContainerStatus for \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\": not found" Jul 15 23:49:50.840770 kubelet[2792]: E0715 23:49:50.840678 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\": not found" containerID="3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72" Jul 15 23:49:50.840770 kubelet[2792]: I0715 23:49:50.840726 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72"} err="failed to get container status \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e48a335d479f2951e9056459198469148557ceebd4b77c5406fecd72eac9e72\": not found" Jul 15 23:49:50.840770 kubelet[2792]: I0715 23:49:50.840760 2792 scope.go:117] "RemoveContainer" containerID="d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb" Jul 15 23:49:50.841059 containerd[1579]: time="2025-07-15T23:49:50.841019234Z" level=error msg="ContainerStatus for \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\": not found" Jul 15 23:49:50.841377 kubelet[2792]: E0715 23:49:50.841313 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\": not found" containerID="d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb" 
Jul 15 23:49:50.841377 kubelet[2792]: I0715 23:49:50.841349 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb"} err="failed to get container status \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d645adfb684c439b019c89c16df3e59134cdccf4db0a449632c0828ef98b8ceb\": not found" Jul 15 23:49:50.841377 kubelet[2792]: I0715 23:49:50.841375 2792 scope.go:117] "RemoveContainer" containerID="c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37" Jul 15 23:49:50.841954 kubelet[2792]: E0715 23:49:50.841859 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\": not found" containerID="c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37" Jul 15 23:49:50.841954 kubelet[2792]: I0715 23:49:50.841914 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37"} err="failed to get container status \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\": rpc error: code = NotFound desc = an error occurred when try to find container \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\": not found" Jul 15 23:49:50.841954 kubelet[2792]: I0715 23:49:50.841938 2792 scope.go:117] "RemoveContainer" containerID="212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56" Jul 15 23:49:50.842199 containerd[1579]: time="2025-07-15T23:49:50.841705327Z" level=error msg="ContainerStatus for \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c684cce088372d4f6c0e9ba1cec9bcff7c82f2a52d7a3501d1683408aa659b37\": not found" Jul 15 23:49:50.842378 containerd[1579]: time="2025-07-15T23:49:50.842265482Z" level=error msg="ContainerStatus for \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\": not found" Jul 15 23:49:50.842612 kubelet[2792]: E0715 23:49:50.842513 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\": not found" containerID="212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56" Jul 15 23:49:50.842612 kubelet[2792]: I0715 23:49:50.842542 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56"} err="failed to get container status \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\": rpc error: code = NotFound desc = an error occurred when try to find container \"212785afeb0027bb3db6f889a3258429ccfd423a0de25a7c513eb45666736b56\": not found" Jul 15 23:49:50.842612 kubelet[2792]: I0715 23:49:50.842564 2792 scope.go:117] "RemoveContainer" containerID="4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2" Jul 15 23:49:50.842928 kubelet[2792]: E0715 23:49:50.842897 2792 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\": not found" containerID="4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2" Jul 15 23:49:50.843062 containerd[1579]: time="2025-07-15T23:49:50.842760274Z" level=error msg="ContainerStatus for \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\": not found" Jul 15 23:49:50.843118 kubelet[2792]: I0715 23:49:50.842928 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2"} err="failed to get container status \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4579924b46ad6111e6d9559f8ab216323b2c47b8efeabaa1d65a8fe4a8d8b0a2\": not found" Jul 15 23:49:50.873485 kubelet[2792]: I0715 23:49:50.873382 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-config-path\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873485 kubelet[2792]: I0715 23:49:50.873480 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cni-path\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873511 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-xtables-lock\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873542 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hubble-tls\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873563 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hostproc\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873593 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0734723d-a40d-4b10-897f-745895fb5023-cilium-config-path\") pod \"0734723d-a40d-4b10-897f-745895fb5023\" (UID: \"0734723d-a40d-4b10-897f-745895fb5023\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873616 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-bpf-maps\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: 
\"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.873733 kubelet[2792]: I0715 23:49:50.873639 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-run\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873663 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-kernel\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873692 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-clustermesh-secrets\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873718 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-cgroup\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873746 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9q74\" (UniqueName: \"kubernetes.io/projected/0734723d-a40d-4b10-897f-745895fb5023-kube-api-access-l9q74\") pod \"0734723d-a40d-4b10-897f-745895fb5023\" (UID: \"0734723d-a40d-4b10-897f-745895fb5023\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873776 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2lc\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-kube-api-access-dv2lc\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874030 kubelet[2792]: I0715 23:49:50.873804 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-net\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874317 kubelet[2792]: I0715 23:49:50.873829 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-etc-cni-netd\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874317 kubelet[2792]: I0715 23:49:50.873852 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-lib-modules\") pod \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\" (UID: \"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5\") " Jul 15 23:49:50.874317 kubelet[2792]: I0715 23:49:50.873959 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: 
"d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.874317 kubelet[2792]: I0715 23:49:50.874011 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.874317 kubelet[2792]: I0715 23:49:50.874044 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.875711 kubelet[2792]: I0715 23:49:50.874816 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.877603 kubelet[2792]: I0715 23:49:50.877514 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.877603 kubelet[2792]: I0715 23:49:50.877568 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.877603 kubelet[2792]: I0715 23:49:50.877594 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.880842 kubelet[2792]: I0715 23:49:50.880783 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:49:50.882324 kubelet[2792]: I0715 23:49:50.881059 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.882919 kubelet[2792]: I0715 23:49:50.882842 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:49:50.883802 kubelet[2792]: I0715 23:49:50.883701 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0734723d-a40d-4b10-897f-745895fb5023-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0734723d-a40d-4b10-897f-745895fb5023" (UID: "0734723d-a40d-4b10-897f-745895fb5023"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:49:50.883802 kubelet[2792]: I0715 23:49:50.883765 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.883802 kubelet[2792]: I0715 23:49:50.883799 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:49:50.886781 kubelet[2792]: I0715 23:49:50.886737 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:49:50.892262 kubelet[2792]: I0715 23:49:50.891186 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0734723d-a40d-4b10-897f-745895fb5023-kube-api-access-l9q74" (OuterVolumeSpecName: "kube-api-access-l9q74") pod "0734723d-a40d-4b10-897f-745895fb5023" (UID: "0734723d-a40d-4b10-897f-745895fb5023"). InnerVolumeSpecName "kube-api-access-l9q74". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:49:50.892490 kubelet[2792]: I0715 23:49:50.892382 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-kube-api-access-dv2lc" (OuterVolumeSpecName: "kube-api-access-dv2lc") pod "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" (UID: "d8109478-ab16-4fc1-b5ca-7ca6ac6330e5"). InnerVolumeSpecName "kube-api-access-dv2lc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:49:50.975100 kubelet[2792]: I0715 23:49:50.975030 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hubble-tls\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975100 kubelet[2792]: I0715 23:49:50.975088 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-hostproc\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975100 kubelet[2792]: I0715 23:49:50.975106 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0734723d-a40d-4b10-897f-745895fb5023-cilium-config-path\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975124 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-bpf-maps\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975139 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-run\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975153 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-kernel\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975182 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-clustermesh-secrets\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975197 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-cgroup\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975213 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9q74\" (UniqueName: \"kubernetes.io/projected/0734723d-a40d-4b10-897f-745895fb5023-kube-api-access-l9q74\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975423 kubelet[2792]: I0715 23:49:50.975227 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dv2lc\" (UniqueName: \"kubernetes.io/projected/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-kube-api-access-dv2lc\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975241 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-host-proc-sys-net\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 
15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975264 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-etc-cni-netd\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975281 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-lib-modules\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975295 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cilium-config-path\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975310 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-cni-path\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:50.975713 kubelet[2792]: I0715 23:49:50.975327 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5-xtables-lock\") on node \"ci-4372-0-1-nightly-20250715-2100-58c013afef0df13dc6a9\" DevicePath \"\"" Jul 15 23:49:51.087536 systemd[1]: Removed slice kubepods-besteffort-pod0734723d_a40d_4b10_897f_745895fb5023.slice - libcontainer container kubepods-besteffort-pod0734723d_a40d_4b10_897f_745895fb5023.slice. Jul 15 23:49:51.103394 systemd[1]: Removed slice kubepods-burstable-podd8109478_ab16_4fc1_b5ca_7ca6ac6330e5.slice - libcontainer container kubepods-burstable-podd8109478_ab16_4fc1_b5ca_7ca6ac6330e5.slice. Jul 15 23:49:51.103633 systemd[1]: kubepods-burstable-podd8109478_ab16_4fc1_b5ca_7ca6ac6330e5.slice: Consumed 9.220s CPU time, 123.5M memory peak, 136K read from disk, 13.3M written to disk. Jul 15 23:49:51.562033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15-shm.mount: Deactivated successfully. Jul 15 23:49:51.562215 systemd[1]: var-lib-kubelet-pods-0734723d\x2da40d\x2d4b10\x2d897f\x2d745895fb5023-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9q74.mount: Deactivated successfully. Jul 15 23:49:51.562349 systemd[1]: var-lib-kubelet-pods-d8109478\x2dab16\x2d4fc1\x2db5ca\x2d7ca6ac6330e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddv2lc.mount: Deactivated successfully. Jul 15 23:49:51.562470 systemd[1]: var-lib-kubelet-pods-d8109478\x2dab16\x2d4fc1\x2db5ca\x2d7ca6ac6330e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:49:51.562581 systemd[1]: var-lib-kubelet-pods-d8109478\x2dab16\x2d4fc1\x2db5ca\x2d7ca6ac6330e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 23:49:52.333534 sshd[4396]: Connection closed by 139.178.89.65 port 54938 Jul 15 23:49:52.334551 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:52.339615 systemd[1]: sshd@30-10.128.0.95:22-139.178.89.65:54938.service: Deactivated successfully. Jul 15 23:49:52.342649 systemd[1]: session-26.scope: Deactivated successfully. 
Jul 15 23:49:52.344926 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. Jul 15 23:49:52.347445 systemd-logind[1545]: Removed session 26. Jul 15 23:49:52.356589 kubelet[2792]: I0715 23:49:52.356528 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0734723d-a40d-4b10-897f-745895fb5023" path="/var/lib/kubelet/pods/0734723d-a40d-4b10-897f-745895fb5023/volumes" Jul 15 23:49:52.357332 kubelet[2792]: I0715 23:49:52.357294 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8109478-ab16-4fc1-b5ca-7ca6ac6330e5" path="/var/lib/kubelet/pods/d8109478-ab16-4fc1-b5ca-7ca6ac6330e5/volumes" Jul 15 23:49:52.392826 systemd[1]: Started sshd@31-10.128.0.95:22-139.178.89.65:53358.service - OpenSSH per-connection server daemon (139.178.89.65:53358). Jul 15 23:49:52.705163 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 53358 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:52.706965 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:52.714647 systemd-logind[1545]: New session 27 of user core. Jul 15 23:49:52.717710 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 23:49:53.414835 ntpd[1535]: Deleting interface #11 lxc_health, fe80::b84a:24ff:fe94:9f66%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Jul 15 23:49:53.415555 ntpd[1535]: 15 Jul 23:49:53 ntpd[1535]: Deleting interface #11 lxc_health, fe80::b84a:24ff:fe94:9f66%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Jul 15 23:49:53.467717 kubelet[2792]: E0715 23:49:53.467579 2792 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 23:49:53.692487 sshd[4549]: Connection closed by 139.178.89.65 port 53358 Jul 15 23:49:53.694736 sshd-session[4546]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:53.697563 systemd[1]: Created slice kubepods-burstable-pod595996b6_4027_4f8d_a291_4b455129b48e.slice - libcontainer container kubepods-burstable-pod595996b6_4027_4f8d_a291_4b455129b48e.slice. Jul 15 23:49:53.708894 systemd[1]: sshd@31-10.128.0.95:22-139.178.89.65:53358.service: Deactivated successfully. Jul 15 23:49:53.715300 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 23:49:53.718770 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit. Jul 15 23:49:53.723410 systemd-logind[1545]: Removed session 27. Jul 15 23:49:53.755153 systemd[1]: Started sshd@32-10.128.0.95:22-139.178.89.65:53364.service - OpenSSH per-connection server daemon (139.178.89.65:53364). 
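The kubelet entries above tear down a fixed set of hostPath, configmap, secret, and projected volumes for the removed Cilium pod, and the same volume names are re-attached below for its replacement, cilium-rkxlz. For reference, a minimal sketch (not taken from this log) of how hostPath volumes of that shape are declared with the k8s.io/api/core/v1 Go types; the node paths are assumptions based on common Cilium defaults, not values read from this host.

```go
// Sketch only, not from this log: hostPath volumes like the ones unmounted
// above and re-attached for cilium-rkxlz below, expressed with the
// k8s.io/api/core/v1 types. Host paths are assumed Cilium defaults.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a corev1.Volume backed by a path on the node.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPathVolume("cilium-run", "/var/run/cilium"),     // assumed path
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),           // assumed path
		hostPathVolume("hostproc", "/proc"),                 // assumed path
		hostPathVolume("cni-path", "/opt/cni/bin"),          // assumed path
		hostPathVolume("etc-cni-netd", "/etc/cni/net.d"),    // assumed path
		hostPathVolume("lib-modules", "/lib/modules"),       // assumed path
		hostPathVolume("xtables-lock", "/run/xtables.lock"), // assumed path
	}
	for _, v := range volumes {
		fmt.Printf("%-13s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```

Secret- and projected-backed volumes such as clustermesh-secrets and hubble-tls are tmpfs mounts managed the same way, which is why systemd reports their corresponding .mount units under /var/lib/kubelet/pods being deactivated above.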
Jul 15 23:49:53.792741 kubelet[2792]: I0715 23:49:53.792679 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-cilium-run\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.792741 kubelet[2792]: I0715 23:49:53.792739 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-etc-cni-netd\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792766 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/595996b6-4027-4f8d-a291-4b455129b48e-cilium-ipsec-secrets\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792796 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-host-proc-sys-net\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792819 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-lib-modules\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792847 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-bpf-maps\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792873 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-cilium-cgroup\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793006 kubelet[2792]: I0715 23:49:53.792895 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-xtables-lock\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793292 kubelet[2792]: I0715 23:49:53.792918 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/595996b6-4027-4f8d-a291-4b455129b48e-clustermesh-secrets\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793292 kubelet[2792]: I0715 23:49:53.792943 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/595996b6-4027-4f8d-a291-4b455129b48e-cilium-config-path\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793292 kubelet[2792]: I0715 23:49:53.792971 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/595996b6-4027-4f8d-a291-4b455129b48e-hubble-tls\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793292 kubelet[2792]: I0715 23:49:53.793002 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-hostproc\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.793292 kubelet[2792]: I0715 23:49:53.793027 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-host-proc-sys-kernel\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.794593 kubelet[2792]: I0715 23:49:53.793100 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhllm\" (UniqueName: \"kubernetes.io/projected/595996b6-4027-4f8d-a291-4b455129b48e-kube-api-access-jhllm\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:53.794593 kubelet[2792]: I0715 23:49:53.793130 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/595996b6-4027-4f8d-a291-4b455129b48e-cni-path\") pod \"cilium-rkxlz\" (UID: \"595996b6-4027-4f8d-a291-4b455129b48e\") " pod="kube-system/cilium-rkxlz" Jul 15 23:49:54.010715 containerd[1579]: time="2025-07-15T23:49:54.010564495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkxlz,Uid:595996b6-4027-4f8d-a291-4b455129b48e,Namespace:kube-system,Attempt:0,}" Jul 15 23:49:54.035824 containerd[1579]: time="2025-07-15T23:49:54.035715850Z" level=info msg="connecting to shim 626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:49:54.083737 systemd[1]: Started cri-containerd-626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294.scope - libcontainer container 626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294. Jul 15 23:49:54.105319 sshd[4559]: Accepted publickey for core from 139.178.89.65 port 53364 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:54.107508 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:54.118534 systemd-logind[1545]: New session 28 of user core. Jul 15 23:49:54.126866 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 15 23:49:54.141791 containerd[1579]: time="2025-07-15T23:49:54.141654467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkxlz,Uid:595996b6-4027-4f8d-a291-4b455129b48e,Namespace:kube-system,Attempt:0,} returns sandbox id \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\"" Jul 15 23:49:54.154480 containerd[1579]: time="2025-07-15T23:49:54.153702244Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:49:54.170660 containerd[1579]: time="2025-07-15T23:49:54.170600451Z" level=info msg="Container 8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:49:54.181059 containerd[1579]: time="2025-07-15T23:49:54.180990620Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\"" Jul 15 23:49:54.183249 containerd[1579]: time="2025-07-15T23:49:54.181870682Z" level=info msg="StartContainer for \"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\"" Jul 15 23:49:54.183249 containerd[1579]: time="2025-07-15T23:49:54.183158113Z" level=info msg="connecting to shim 8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" protocol=ttrpc version=3 Jul 15 23:49:54.213981 systemd[1]: Started cri-containerd-8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9.scope - libcontainer container 8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9. Jul 15 23:49:54.263082 containerd[1579]: time="2025-07-15T23:49:54.262946248Z" level=info msg="StartContainer for \"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\" returns successfully" Jul 15 23:49:54.272963 systemd[1]: cri-containerd-8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9.scope: Deactivated successfully. Jul 15 23:49:54.278301 containerd[1579]: time="2025-07-15T23:49:54.277841049Z" level=info msg="received exit event container_id:\"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\" id:\"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\" pid:4625 exited_at:{seconds:1752623394 nanos:276885362}" Jul 15 23:49:54.278629 containerd[1579]: time="2025-07-15T23:49:54.278244234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\" id:\"8c1e0304a4ccb0d52f0c9dbda5a206f559e4b29dc3cd74c8c70ea7ad82b7b1c9\" pid:4625 exited_at:{seconds:1752623394 nanos:276885362}" Jul 15 23:49:54.319482 sshd[4612]: Connection closed by 139.178.89.65 port 53364 Jul 15 23:49:54.320731 sshd-session[4559]: pam_unix(sshd:session): session closed for user core Jul 15 23:49:54.327592 systemd[1]: sshd@32-10.128.0.95:22-139.178.89.65:53364.service: Deactivated successfully. Jul 15 23:49:54.331182 systemd[1]: session-28.scope: Deactivated successfully. Jul 15 23:49:54.334978 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit. Jul 15 23:49:54.337047 systemd-logind[1545]: Removed session 28. Jul 15 23:49:54.379505 systemd[1]: Started sshd@33-10.128.0.95:22-139.178.89.65:53378.service - OpenSSH per-connection server daemon (139.178.89.65:53378). 
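The containerd messages above record the CRI sequence the kubelet drives for the new pod: RunPodSandbox returns a sandbox id, CreateContainer places the mount-cgroup container inside that sandbox, StartContainer runs it, and the task's exit comes back as a TaskExit event. Below is a minimal sketch of the same three calls issued directly against a CRI endpoint with the k8s.io/cri-api client; the socket path, image, and command are placeholders, not values from this host.

```go
// Sketch only, not from this log: the RunPodSandbox / CreateContainer /
// StartContainer sequence visible above, issued against a CRI endpoint.
// Socket path, image, and command are illustrative assumptions.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed containerd CRI socket location.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-rkxlz",
			Namespace: "kube-system",
			Uid:       "595996b6-4027-4f8d-a291-4b455129b48e",
			Attempt:   0,
		},
	}

	// 1. Create the pod sandbox (the "RunPodSandbox ... returns sandbox id" line).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Create a container inside it (the "CreateContainer within sandbox" line).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:latest"}, // placeholder
			Command:  []string{"/bin/true"},                                        // placeholder
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Start it (the "StartContainer ... returns successfully" line).
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}
```

crictl drives these same RPCs (runp, create, start), which is the usual way to reproduce this flow by hand when debugging a node.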
Jul 15 23:49:54.686745 sshd[4662]: Accepted publickey for core from 139.178.89.65 port 53378 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:49:54.688591 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:49:54.697041 systemd-logind[1545]: New session 29 of user core. Jul 15 23:49:54.701713 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 15 23:49:54.822252 containerd[1579]: time="2025-07-15T23:49:54.822191382Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:49:54.831814 containerd[1579]: time="2025-07-15T23:49:54.831754909Z" level=info msg="Container 59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:49:54.841366 containerd[1579]: time="2025-07-15T23:49:54.841285204Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\"" Jul 15 23:49:54.842500 containerd[1579]: time="2025-07-15T23:49:54.842437682Z" level=info msg="StartContainer for \"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\"" Jul 15 23:49:54.843892 containerd[1579]: time="2025-07-15T23:49:54.843852038Z" level=info msg="connecting to shim 59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" protocol=ttrpc version=3 Jul 15 23:49:54.882770 systemd[1]: Started cri-containerd-59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89.scope - libcontainer container 59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89. Jul 15 23:49:54.926603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062650043.mount: Deactivated successfully. Jul 15 23:49:55.020860 containerd[1579]: time="2025-07-15T23:49:55.020627720Z" level=info msg="StartContainer for \"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\" returns successfully" Jul 15 23:49:55.034085 systemd[1]: cri-containerd-59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89.scope: Deactivated successfully. Jul 15 23:49:55.037653 containerd[1579]: time="2025-07-15T23:49:55.037595272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\" id:\"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\" pid:4681 exited_at:{seconds:1752623395 nanos:35760210}" Jul 15 23:49:55.037923 containerd[1579]: time="2025-07-15T23:49:55.037778841Z" level=info msg="received exit event container_id:\"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\" id:\"59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89\" pid:4681 exited_at:{seconds:1752623395 nanos:35760210}" Jul 15 23:49:55.072542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59836641149a9c7ef39d3bb9e81e2f6f6700f4cbee40330d13cca5994c7dfe89-rootfs.mount: Deactivated successfully. 
Jul 15 23:49:55.831724 containerd[1579]: time="2025-07-15T23:49:55.831527328Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:49:55.864731 containerd[1579]: time="2025-07-15T23:49:55.864666778Z" level=info msg="Container 21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:49:55.876938 containerd[1579]: time="2025-07-15T23:49:55.876869997Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\"" Jul 15 23:49:55.878592 containerd[1579]: time="2025-07-15T23:49:55.877781290Z" level=info msg="StartContainer for \"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\"" Jul 15 23:49:55.880508 containerd[1579]: time="2025-07-15T23:49:55.880388455Z" level=info msg="connecting to shim 21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" protocol=ttrpc version=3 Jul 15 23:49:55.931785 systemd[1]: Started cri-containerd-21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31.scope - libcontainer container 21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31. Jul 15 23:49:56.001816 systemd[1]: cri-containerd-21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31.scope: Deactivated successfully. Jul 15 23:49:56.007154 containerd[1579]: time="2025-07-15T23:49:56.007091879Z" level=info msg="StartContainer for \"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\" returns successfully" Jul 15 23:49:56.010411 containerd[1579]: time="2025-07-15T23:49:56.010182500Z" level=info msg="received exit event container_id:\"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\" id:\"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\" pid:4728 exited_at:{seconds:1752623396 nanos:8683051}" Jul 15 23:49:56.011232 containerd[1579]: time="2025-07-15T23:49:56.010971674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\" id:\"21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31\" pid:4728 exited_at:{seconds:1752623396 nanos:8683051}" Jul 15 23:49:56.047253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d6f5ffa282988fbab7a391fba9b27196ade03631f8fc491236d5c84c067c31-rootfs.mount: Deactivated successfully. 
Jul 15 23:49:56.835546 containerd[1579]: time="2025-07-15T23:49:56.835398055Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:49:56.851471 containerd[1579]: time="2025-07-15T23:49:56.851318008Z" level=info msg="Container 7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:49:56.866164 containerd[1579]: time="2025-07-15T23:49:56.866097071Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\"" Jul 15 23:49:56.867495 containerd[1579]: time="2025-07-15T23:49:56.867302892Z" level=info msg="StartContainer for \"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\"" Jul 15 23:49:56.869174 containerd[1579]: time="2025-07-15T23:49:56.869127978Z" level=info msg="connecting to shim 7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" protocol=ttrpc version=3 Jul 15 23:49:56.901751 systemd[1]: Started cri-containerd-7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359.scope - libcontainer container 7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359. Jul 15 23:49:56.978136 systemd[1]: cri-containerd-7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359.scope: Deactivated successfully. Jul 15 23:49:56.984474 containerd[1579]: time="2025-07-15T23:49:56.984298500Z" level=info msg="received exit event container_id:\"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\" id:\"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\" pid:4768 exited_at:{seconds:1752623396 nanos:982795885}" Jul 15 23:49:56.984779 containerd[1579]: time="2025-07-15T23:49:56.984323436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\" id:\"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\" pid:4768 exited_at:{seconds:1752623396 nanos:982795885}" Jul 15 23:49:56.986772 containerd[1579]: time="2025-07-15T23:49:56.986632253Z" level=info msg="StartContainer for \"7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359\" returns successfully" Jul 15 23:49:57.028877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ddc004fcde814c55350b94589ff43e6e53db69ee97cbd0b204e88fa3c8f7359-rootfs.mount: Deactivated successfully. 
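Between the sandbox creation and the long-running cilium-agent container created next, the log shows four short-lived containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each created, started, and exited before the next one begins: the kubelet is executing the pod's init containers strictly in order. A minimal sketch of how that ordering is expressed with the corev1 Go types follows; image and command values are placeholders, not taken from this node.

```go
// Sketch only, not from this log: init containers are listed in PodSpec
// order and the kubelet runs them one at a time, each to successful exit,
// before starting the regular containers. Images and commands are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ciliumPodSpecSketch() corev1.PodSpec {
	initNames := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	inits := make([]corev1.Container, 0, len(initNames))
	for _, name := range initNames {
		inits = append(inits, corev1.Container{
			Name:    name,
			Image:   "quay.io/cilium/cilium:latest",    // placeholder
			Command: []string{"/bin/sh", "-c", "true"}, // placeholder
		})
	}
	return corev1.PodSpec{
		InitContainers: inits, // executed strictly in this order
		Containers: []corev1.Container{{
			Name:  "cilium-agent",
			Image: "quay.io/cilium/cilium:latest", // placeholder
		}},
	}
}

func main() {
	spec := ciliumPodSpecSketch()
	for i, c := range spec.InitContainers {
		fmt.Printf("init %d: %s\n", i+1, c.Name)
	}
	fmt.Printf("main:   %s\n", spec.Containers[0].Name)
}
```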
Jul 15 23:49:57.843073 containerd[1579]: time="2025-07-15T23:49:57.843010522Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:49:57.864490 containerd[1579]: time="2025-07-15T23:49:57.862523795Z" level=info msg="Container d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:49:57.883315 containerd[1579]: time="2025-07-15T23:49:57.883260569Z" level=info msg="CreateContainer within sandbox \"626c95375f3792874a70eaf7803196b44168aed41a844a4b56e3bf67b8e9f294\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\"" Jul 15 23:49:57.885996 containerd[1579]: time="2025-07-15T23:49:57.885946228Z" level=info msg="StartContainer for \"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\"" Jul 15 23:49:57.887483 containerd[1579]: time="2025-07-15T23:49:57.887400452Z" level=info msg="connecting to shim d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66" address="unix:///run/containerd/s/79df3206166de9a147fc4abb1290184d412c2c7ac8c9a8342269b71c184d7f4a" protocol=ttrpc version=3 Jul 15 23:49:57.957338 systemd[1]: Started cri-containerd-d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66.scope - libcontainer container d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66. Jul 15 23:49:58.091758 containerd[1579]: time="2025-07-15T23:49:58.091704784Z" level=info msg="StartContainer for \"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" returns successfully" Jul 15 23:49:58.202111 containerd[1579]: time="2025-07-15T23:49:58.202045507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"e6e1aa7fb82e5b1105a832c71a478ce2de63f42f83771c688a0dca52764d1f1c\" pid:4840 exited_at:{seconds:1752623398 nanos:201319858}" Jul 15 23:49:58.297709 containerd[1579]: time="2025-07-15T23:49:58.297637133Z" level=info msg="StopPodSandbox for \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\"" Jul 15 23:49:58.297936 containerd[1579]: time="2025-07-15T23:49:58.297891317Z" level=info msg="TearDown network for sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" successfully" Jul 15 23:49:58.298008 containerd[1579]: time="2025-07-15T23:49:58.297934002Z" level=info msg="StopPodSandbox for \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" returns successfully" Jul 15 23:49:58.299289 containerd[1579]: time="2025-07-15T23:49:58.299213886Z" level=info msg="RemovePodSandbox for \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\"" Jul 15 23:49:58.299289 containerd[1579]: time="2025-07-15T23:49:58.299279551Z" level=info msg="Forcibly stopping sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\"" Jul 15 23:49:58.299814 containerd[1579]: time="2025-07-15T23:49:58.299481144Z" level=info msg="TearDown network for sandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" successfully" Jul 15 23:49:58.302211 containerd[1579]: time="2025-07-15T23:49:58.302135700Z" level=info msg="Ensure that sandbox 6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15 in task-service has been cleanup successfully" Jul 15 23:49:58.306992 containerd[1579]: 
time="2025-07-15T23:49:58.306903303Z" level=info msg="RemovePodSandbox \"6bc6c518641b87d87951dda1ed9bdde8719a97bd58139791172f997d74e26c15\" returns successfully" Jul 15 23:49:58.308194 containerd[1579]: time="2025-07-15T23:49:58.308046394Z" level=info msg="StopPodSandbox for \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\"" Jul 15 23:49:58.308366 containerd[1579]: time="2025-07-15T23:49:58.308235973Z" level=info msg="TearDown network for sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" successfully" Jul 15 23:49:58.308366 containerd[1579]: time="2025-07-15T23:49:58.308255827Z" level=info msg="StopPodSandbox for \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" returns successfully" Jul 15 23:49:58.309267 containerd[1579]: time="2025-07-15T23:49:58.309226162Z" level=info msg="RemovePodSandbox for \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\"" Jul 15 23:49:58.309382 containerd[1579]: time="2025-07-15T23:49:58.309273828Z" level=info msg="Forcibly stopping sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\"" Jul 15 23:49:58.309441 containerd[1579]: time="2025-07-15T23:49:58.309421555Z" level=info msg="TearDown network for sandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" successfully" Jul 15 23:49:58.311690 containerd[1579]: time="2025-07-15T23:49:58.311648093Z" level=info msg="Ensure that sandbox f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e in task-service has been cleanup successfully" Jul 15 23:49:58.315907 containerd[1579]: time="2025-07-15T23:49:58.315790208Z" level=info msg="RemovePodSandbox \"f1576878cefcf3b05702f4b6661304f69af70cc231ba4b1fdfd4a2d4555df93e\" returns successfully" Jul 15 23:49:58.657763 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 15 23:49:58.874780 kubelet[2792]: I0715 23:49:58.874638 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rkxlz" podStartSLOduration=5.87461227 podStartE2EDuration="5.87461227s" podCreationTimestamp="2025-07-15 23:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:49:58.873773799 +0000 UTC m=+120.777387863" watchObservedRunningTime="2025-07-15 23:49:58.87461227 +0000 UTC m=+120.778226324" Jul 15 23:49:59.296261 containerd[1579]: time="2025-07-15T23:49:59.296189039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"f4f45e77fcfd2161bb0391407e37f6adae0a55a80dced8bc7d5d26e5cbb60cfc\" pid:4917 exit_status:1 exited_at:{seconds:1752623399 nanos:295041422}" Jul 15 23:50:01.482989 containerd[1579]: time="2025-07-15T23:50:01.482929027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"dd8bfe202933f4174c9d7c43369a06a0bbd7c0f16f77770f4d2099ca13464be9\" pid:5191 exit_status:1 exited_at:{seconds:1752623401 nanos:482276101}" Jul 15 23:50:02.166752 systemd-networkd[1460]: lxc_health: Link UP Jul 15 23:50:02.180886 systemd-networkd[1460]: lxc_health: Gained carrier Jul 15 23:50:03.389763 systemd-networkd[1460]: lxc_health: Gained IPv6LL Jul 15 23:50:03.808162 containerd[1579]: time="2025-07-15T23:50:03.808087435Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"ce7f50a506103bf8ea156d59a9eb978dfc89a61760cccaa9985b1313a1a314c4\" pid:5384 exited_at:{seconds:1752623403 nanos:806311433}" Jul 15 23:50:05.414938 ntpd[1535]: Listen normally on 14 lxc_health [fe80::c45c:28ff:fe5a:c937%14]:123 Jul 15 23:50:05.415706 ntpd[1535]: 15 Jul 23:50:05 ntpd[1535]: Listen normally on 14 lxc_health [fe80::c45c:28ff:fe5a:c937%14]:123 Jul 15 23:50:06.062381 containerd[1579]: time="2025-07-15T23:50:06.062324994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"4e59fa3cd3c02511024dd71300a2c6e58506d7a35715d6b3c37adadcbc716f0e\" pid:5418 exited_at:{seconds:1752623406 nanos:61202508}" Jul 15 23:50:08.373488 containerd[1579]: time="2025-07-15T23:50:08.372574502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7a22a0c2c77d8d0d954f3554d2524117f8a4623aae15d083c4f3d93b3e3dc66\" id:\"a81fed8eb99308c1072b9247b00d24a1d5d85071909ce768844f553c8a05e477\" pid:5447 exited_at:{seconds:1752623408 nanos:371358181}" Jul 15 23:50:08.427126 sshd[4664]: Connection closed by 139.178.89.65 port 53378 Jul 15 23:50:08.428786 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jul 15 23:50:08.441112 systemd[1]: sshd@33-10.128.0.95:22-139.178.89.65:53378.service: Deactivated successfully. Jul 15 23:50:08.442562 systemd-logind[1545]: Session 29 logged out. Waiting for processes to exit. Jul 15 23:50:08.450629 systemd[1]: session-29.scope: Deactivated successfully. Jul 15 23:50:08.460870 systemd-logind[1545]: Removed session 29.
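The recurring TaskExit events against the cilium-agent container carry fresh exec ids rather than the container id, which is consistent with periodic execs (for example exec-based probes) being run inside the running container; the log itself does not identify the caller, so the following is a hedged illustration only. It shows an exec-based liveness probe declared with the corev1 types; the command and timings are assumptions, not read from this node.

```go
// Hedged illustration only: an exec-based liveness probe of the kind that
// would produce periodic exec events inside a running container. The probe
// command and timings below are assumptions.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func livenessProbeSketch() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"cilium", "status", "--brief"}, // assumed command
			},
		},
		InitialDelaySeconds: 120, // assumed
		PeriodSeconds:       30,  // assumed
		TimeoutSeconds:      5,   // assumed
		FailureThreshold:    10,  // assumed
	}
}

func main() {
	p := livenessProbeSketch()
	fmt.Printf("exec probe: %v every %ds\n", p.ProbeHandler.Exec.Command, p.PeriodSeconds)
}
```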