Jul 7 00:18:44.199519 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:18:44.199598 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:18:44.199618 kernel: BIOS-provided physical RAM map: Jul 7 00:18:44.199632 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 7 00:18:44.199645 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 7 00:18:44.199659 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 7 00:18:44.199680 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 7 00:18:44.199695 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 7 00:18:44.199710 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jul 7 00:18:44.199725 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jul 7 00:18:44.199739 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jul 7 00:18:44.199753 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 7 00:18:44.199767 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 7 00:18:44.199782 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 7 00:18:44.199805 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 7 00:18:44.199820 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 7 00:18:44.199836 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 7 00:18:44.199852 kernel: NX (Execute Disable) protection: active Jul 7 00:18:44.199868 kernel: APIC: Static calls initialized Jul 7 00:18:44.199884 kernel: efi: EFI v2.7 by EDK II Jul 7 00:18:44.199900 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jul 7 00:18:44.199916 kernel: random: crng init done Jul 7 00:18:44.199936 kernel: secureboot: Secure boot disabled Jul 7 00:18:44.199951 kernel: SMBIOS 2.4 present. 
Jul 7 00:18:44.199967 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jul 7 00:18:44.199983 kernel: DMI: Memory slots populated: 1/1 Jul 7 00:18:44.199999 kernel: Hypervisor detected: KVM Jul 7 00:18:44.200015 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 00:18:44.200030 kernel: kvm-clock: using sched offset of 15721178916 cycles Jul 7 00:18:44.200048 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 00:18:44.200064 kernel: tsc: Detected 2299.998 MHz processor Jul 7 00:18:44.200087 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:18:44.200109 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:18:44.200125 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 7 00:18:44.200141 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jul 7 00:18:44.200158 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:18:44.200174 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 7 00:18:44.200191 kernel: Using GB pages for direct mapping Jul 7 00:18:44.200207 kernel: ACPI: Early table checksum verification disabled Jul 7 00:18:44.200223 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 7 00:18:44.200252 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 7 00:18:44.200270 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 7 00:18:44.200287 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 7 00:18:44.200303 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 7 00:18:44.200321 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jul 7 00:18:44.200337 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 7 00:18:44.200359 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 7 00:18:44.200395 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 7 00:18:44.200413 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 7 00:18:44.200431 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 7 00:18:44.200448 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 7 00:18:44.200464 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 7 00:18:44.200480 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 7 00:18:44.200497 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 7 00:18:44.200512 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 7 00:18:44.200534 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 7 00:18:44.200550 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 7 00:18:44.200576 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 7 00:18:44.200593 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 7 00:18:44.200610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 00:18:44.200626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 7 00:18:44.200642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jul 7 00:18:44.200659 kernel: NUMA: Node 0 [mem 
0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jul 7 00:18:44.200675 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jul 7 00:18:44.200695 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Jul 7 00:18:44.200713 kernel: Zone ranges: Jul 7 00:18:44.200729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:18:44.200746 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 00:18:44.200762 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 7 00:18:44.200779 kernel: Device empty Jul 7 00:18:44.200797 kernel: Movable zone start for each node Jul 7 00:18:44.200814 kernel: Early memory node ranges Jul 7 00:18:44.200832 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 7 00:18:44.200855 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 7 00:18:44.200872 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jul 7 00:18:44.200890 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jul 7 00:18:44.200907 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 7 00:18:44.200925 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 7 00:18:44.200943 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 7 00:18:44.200961 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:18:44.200979 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 7 00:18:44.200996 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 7 00:18:44.201019 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jul 7 00:18:44.201037 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 7 00:18:44.201055 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 7 00:18:44.201072 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 7 00:18:44.201091 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 00:18:44.201109 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 00:18:44.201127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 00:18:44.201145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:18:44.201162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 00:18:44.201182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 00:18:44.201198 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:18:44.201215 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:18:44.201231 kernel: CPU topo: Max. logical dies: 1 Jul 7 00:18:44.201247 kernel: CPU topo: Max. dies per package: 1 Jul 7 00:18:44.201263 kernel: CPU topo: Max. threads per core: 2 Jul 7 00:18:44.201280 kernel: CPU topo: Num. cores per package: 1 Jul 7 00:18:44.201296 kernel: CPU topo: Num. 
threads per package: 2 Jul 7 00:18:44.201312 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 00:18:44.201343 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 7 00:18:44.201364 kernel: Booting paravirtualized kernel on KVM Jul 7 00:18:44.202433 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:18:44.202457 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 00:18:44.202473 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 00:18:44.202488 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 00:18:44.202502 kernel: pcpu-alloc: [0] 0 1 Jul 7 00:18:44.202515 kernel: kvm-guest: PV spinlocks enabled Jul 7 00:18:44.202531 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 00:18:44.202570 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:18:44.202587 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:18:44.202604 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 7 00:18:44.202621 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 00:18:44.202638 kernel: Fallback order for Node 0: 0 Jul 7 00:18:44.202655 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Jul 7 00:18:44.202671 kernel: Policy zone: Normal Jul 7 00:18:44.202684 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:18:44.202699 kernel: software IO TLB: area num 2. Jul 7 00:18:44.202730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 00:18:44.202746 kernel: Kernel/User page tables isolation: enabled Jul 7 00:18:44.202765 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:18:44.202782 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:18:44.202797 kernel: Dynamic Preempt: voluntary Jul 7 00:18:44.202813 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:18:44.202832 kernel: rcu: RCU event tracing is enabled. Jul 7 00:18:44.202850 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 00:18:44.202868 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:18:44.202891 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:18:44.202909 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:18:44.202927 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:18:44.202945 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 00:18:44.202962 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:18:44.202979 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:18:44.202996 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:18:44.203017 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 00:18:44.203033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 7 00:18:44.203050 kernel: Console: colour dummy device 80x25 Jul 7 00:18:44.203068 kernel: printk: legacy console [ttyS0] enabled Jul 7 00:18:44.203085 kernel: ACPI: Core revision 20240827 Jul 7 00:18:44.203103 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:18:44.203121 kernel: x2apic enabled Jul 7 00:18:44.203139 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:18:44.203157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 7 00:18:44.203180 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 7 00:18:44.203199 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jul 7 00:18:44.203217 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 7 00:18:44.203236 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 7 00:18:44.203256 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:18:44.203273 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 7 00:18:44.203291 kernel: Spectre V2 : Mitigation: IBRS Jul 7 00:18:44.203308 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:18:44.203326 kernel: RETBleed: Mitigation: IBRS Jul 7 00:18:44.203349 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:18:44.203366 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jul 7 00:18:44.204445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:18:44.204474 kernel: MDS: Mitigation: Clear CPU buffers Jul 7 00:18:44.204492 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 00:18:44.204510 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:18:44.204527 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:18:44.204543 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:18:44.204567 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:18:44.204590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:18:44.204608 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 7 00:18:44.204626 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:18:44.204642 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:18:44.204660 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:18:44.204676 kernel: landlock: Up and running. Jul 7 00:18:44.204693 kernel: SELinux: Initializing. Jul 7 00:18:44.204711 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:18:44.204728 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:18:44.204749 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 7 00:18:44.204767 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 7 00:18:44.204784 kernel: signal: max sigframe size: 1776 Jul 7 00:18:44.204801 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:18:44.204820 kernel: rcu: Max phase no-delay instances is 400. 
Jul 7 00:18:44.204837 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 00:18:44.204855 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 00:18:44.204872 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:18:44.204890 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:18:44.204911 kernel: .... node #0, CPUs: #1 Jul 7 00:18:44.204930 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 7 00:18:44.204949 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 7 00:18:44.204966 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 00:18:44.204984 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 7 00:18:44.205002 kernel: Memory: 7564260K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 290712K reserved, 0K cma-reserved) Jul 7 00:18:44.205020 kernel: devtmpfs: initialized Jul 7 00:18:44.205037 kernel: x86/mm: Memory block size: 128MB Jul 7 00:18:44.205060 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 7 00:18:44.205078 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:18:44.205100 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 00:18:44.205119 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:18:44.205136 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:18:44.205154 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:18:44.205172 kernel: audit: type=2000 audit(1751847518.737:1): state=initialized audit_enabled=0 res=1 Jul 7 00:18:44.205189 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:18:44.205207 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:18:44.205229 kernel: cpuidle: using governor menu Jul 7 00:18:44.205247 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:18:44.205265 kernel: dca service started, version 1.12.1 Jul 7 00:18:44.205282 kernel: PCI: Using configuration type 1 for base access Jul 7 00:18:44.205300 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 00:18:44.205317 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:18:44.205335 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:18:44.205353 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:18:44.205371 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:18:44.205408 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:18:44.205424 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:18:44.205440 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:18:44.205456 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 7 00:18:44.205472 kernel: ACPI: Interpreter enabled Jul 7 00:18:44.205488 kernel: ACPI: PM: (supports S0 S3 S5) Jul 7 00:18:44.205505 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:18:44.205523 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:18:44.205540 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 7 00:18:44.205567 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 7 00:18:44.205585 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 00:18:44.205896 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:18:44.206136 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 00:18:44.206317 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 00:18:44.206339 kernel: PCI host bridge to bus 0000:00 Jul 7 00:18:44.209643 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:18:44.209863 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:18:44.210032 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:18:44.210198 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 7 00:18:44.210356 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:18:44.210616 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:18:44.210822 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 7 00:18:44.211034 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 7 00:18:44.211223 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 7 00:18:44.213464 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jul 7 00:18:44.213708 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jul 7 00:18:44.213892 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jul 7 00:18:44.214078 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 7 00:18:44.214263 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jul 7 00:18:44.214474 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jul 7 00:18:44.214711 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 7 00:18:44.215008 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jul 7 00:18:44.215202 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jul 7 00:18:44.215226 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 00:18:44.215246 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 00:18:44.215265 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 
Jul 7 00:18:44.215291 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 00:18:44.215309 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 7 00:18:44.215329 kernel: iommu: Default domain type: Translated Jul 7 00:18:44.215348 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:18:44.215367 kernel: efivars: Registered efivars operations Jul 7 00:18:44.215407 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:18:44.215426 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:18:44.215445 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 7 00:18:44.215464 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 7 00:18:44.215486 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jul 7 00:18:44.215505 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 7 00:18:44.215523 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 7 00:18:44.215542 kernel: vgaarb: loaded Jul 7 00:18:44.215560 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 00:18:44.215579 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:18:44.215599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:18:44.215617 kernel: pnp: PnP ACPI init Jul 7 00:18:44.215636 kernel: pnp: PnP ACPI: found 7 devices Jul 7 00:18:44.215659 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:18:44.215678 kernel: NET: Registered PF_INET protocol family Jul 7 00:18:44.215698 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 00:18:44.215717 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 7 00:18:44.215736 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:18:44.215755 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:18:44.215774 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:18:44.215793 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 7 00:18:44.215820 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:18:44.215842 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:18:44.215861 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:18:44.215880 kernel: NET: Registered PF_XDP protocol family Jul 7 00:18:44.216062 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:18:44.216231 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:18:44.218147 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:18:44.218353 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 7 00:18:44.218587 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 00:18:44.218623 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:18:44.218640 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:18:44.218658 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jul 7 00:18:44.218676 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 00:18:44.218694 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 7 00:18:44.218713 kernel: clocksource: Switched to clocksource tsc Jul 7 00:18:44.218731 kernel: Initialise system trusted keyrings Jul 7 00:18:44.218748 kernel: workingset: 
timestamp_bits=39 max_order=21 bucket_order=0 Jul 7 00:18:44.218771 kernel: Key type asymmetric registered Jul 7 00:18:44.218788 kernel: Asymmetric key parser 'x509' registered Jul 7 00:18:44.218815 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 00:18:44.218834 kernel: io scheduler mq-deadline registered Jul 7 00:18:44.218852 kernel: io scheduler kyber registered Jul 7 00:18:44.218870 kernel: io scheduler bfq registered Jul 7 00:18:44.218888 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:18:44.218907 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 7 00:18:44.219119 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 7 00:18:44.219148 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 7 00:18:44.219343 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 7 00:18:44.219370 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 7 00:18:44.219584 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 7 00:18:44.219609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:18:44.219628 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:18:44.219648 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 00:18:44.219667 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 7 00:18:44.219693 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 7 00:18:44.219912 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 7 00:18:44.219939 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 00:18:44.219956 kernel: i8042: Warning: Keylock active Jul 7 00:18:44.219975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 00:18:44.219992 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 00:18:44.220188 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 7 00:18:44.220366 kernel: rtc_cmos 00:00: registered as rtc0 Jul 7 00:18:44.224046 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T00:18:43 UTC (1751847523) Jul 7 00:18:44.224233 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 7 00:18:44.224257 kernel: intel_pstate: CPU model not supported Jul 7 00:18:44.224275 kernel: pstore: Using crash dump compression: deflate Jul 7 00:18:44.224291 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 00:18:44.224308 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:18:44.224326 kernel: Segment Routing with IPv6 Jul 7 00:18:44.224343 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:18:44.224368 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:18:44.224407 kernel: Key type dns_resolver registered Jul 7 00:18:44.224424 kernel: IPI shorthand broadcast: enabled Jul 7 00:18:44.224440 kernel: sched_clock: Marking stable (3945005817, 991962288)->(5526896294, -589928189) Jul 7 00:18:44.224458 kernel: registered taskstats version 1 Jul 7 00:18:44.224477 kernel: Loading compiled-in X.509 certificates Jul 7 00:18:44.224495 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:18:44.224513 kernel: Demotion targets for Node 0: null Jul 7 00:18:44.224529 kernel: Key type .fscrypt registered Jul 7 00:18:44.224551 kernel: Key type fscrypt-provisioning registered Jul 7 00:18:44.224568 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:18:44.224584 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jul 7 00:18:44.224603 kernel: ima: No architecture policies found Jul 7 00:18:44.224621 kernel: clk: Disabling unused clocks Jul 7 00:18:44.224640 kernel: Warning: unable to open an initial console. Jul 7 00:18:44.224659 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:18:44.224675 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:18:44.224692 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:18:44.224713 kernel: Run /init as init process Jul 7 00:18:44.224731 kernel: with arguments: Jul 7 00:18:44.224749 kernel: /init Jul 7 00:18:44.224766 kernel: with environment: Jul 7 00:18:44.226421 kernel: HOME=/ Jul 7 00:18:44.226446 kernel: TERM=linux Jul 7 00:18:44.226463 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:18:44.226483 systemd[1]: Successfully made /usr/ read-only. Jul 7 00:18:44.226514 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:18:44.226538 systemd[1]: Detected virtualization google. Jul 7 00:18:44.226557 systemd[1]: Detected architecture x86-64. Jul 7 00:18:44.226574 systemd[1]: Running in initrd. Jul 7 00:18:44.226589 systemd[1]: No hostname configured, using default hostname. Jul 7 00:18:44.226606 systemd[1]: Hostname set to . Jul 7 00:18:44.226623 systemd[1]: Initializing machine ID from random generator. Jul 7 00:18:44.226640 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:18:44.226664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:18:44.226703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:18:44.226728 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:18:44.226753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:18:44.226773 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:18:44.226798 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:18:44.226830 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:18:44.226850 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:18:44.226868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:18:44.226887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:18:44.226908 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:18:44.226928 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:18:44.226952 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:18:44.226972 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:18:44.226992 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:18:44.227010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 7 00:18:44.227026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:18:44.227044 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:18:44.227063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:18:44.227082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:18:44.227099 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:18:44.227124 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:18:44.227143 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:18:44.227163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:18:44.227184 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:18:44.227206 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:18:44.227226 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:18:44.227246 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:18:44.227266 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:18:44.227289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:18:44.227308 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:18:44.227328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:18:44.227349 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:18:44.227471 systemd-journald[207]: Collecting audit messages is disabled. Jul 7 00:18:44.227527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:18:44.227549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:18:44.227666 systemd-journald[207]: Journal started Jul 7 00:18:44.227730 systemd-journald[207]: Runtime Journal (/run/log/journal/34ff9248c9b848c4b9285c7eccc4c398) is 8M, max 148.9M, 140.9M free. Jul 7 00:18:44.211798 systemd-modules-load[208]: Inserted module 'overlay' Jul 7 00:18:44.234408 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:18:44.240504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:18:44.251598 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:18:44.262715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:18:44.271574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:18:44.271620 kernel: Bridge firewalling registered Jul 7 00:18:44.271025 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 7 00:18:44.284627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:18:44.289091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:18:44.298589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:18:44.309723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:18:44.317535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:18:44.326582 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:18:44.329476 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:18:44.329799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:18:44.342032 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:18:44.348766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:18:44.375914 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:18:44.418733 systemd-resolved[245]: Positive Trust Anchors: Jul 7 00:18:44.418757 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:18:44.418827 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:18:44.425044 systemd-resolved[245]: Defaulting to hostname 'linux'. Jul 7 00:18:44.426896 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:18:44.444046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:18:44.515433 kernel: SCSI subsystem initialized Jul 7 00:18:44.529422 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:18:44.542438 kernel: iscsi: registered transport (tcp) Jul 7 00:18:44.569427 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:18:44.569516 kernel: QLogic iSCSI HBA Driver Jul 7 00:18:44.594599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:18:44.617615 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:18:44.621642 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:18:44.689044 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:18:44.695497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:18:44.763463 kernel: raid6: avx2x4 gen() 17777 MB/s Jul 7 00:18:44.780430 kernel: raid6: avx2x2 gen() 17639 MB/s Jul 7 00:18:44.797940 kernel: raid6: avx2x1 gen() 13500 MB/s Jul 7 00:18:44.798043 kernel: raid6: using algorithm avx2x4 gen() 17777 MB/s Jul 7 00:18:44.816024 kernel: raid6: .... 
xor() 6906 MB/s, rmw enabled Jul 7 00:18:44.816133 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:18:44.840426 kernel: xor: automatically using best checksumming function avx Jul 7 00:18:45.029428 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:18:45.038281 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:18:45.042005 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:18:45.079827 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 7 00:18:45.089203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:18:45.095097 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:18:45.131804 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jul 7 00:18:45.168022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:18:45.170981 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:18:45.269424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:18:45.276848 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:18:45.370404 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:18:45.395455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 00:18:45.402429 kernel: AES CTR mode by8 optimization enabled Jul 7 00:18:45.417434 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 7 00:18:45.563945 kernel: scsi host0: Virtio SCSI HBA Jul 7 00:18:45.580458 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 7 00:18:45.625404 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 7 00:18:45.625775 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 7 00:18:45.628845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:18:45.629567 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 7 00:18:45.629865 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 7 00:18:45.630093 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:18:45.629123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:18:45.635952 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:18:45.644065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:18:45.654050 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:18:45.654097 kernel: GPT:17805311 != 25165823 Jul 7 00:18:45.654123 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:18:45.654149 kernel: GPT:17805311 != 25165823 Jul 7 00:18:45.654174 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:18:45.654199 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:18:45.654230 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 7 00:18:45.656612 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:18:45.700442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:18:45.755080 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 7 00:18:45.769865 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 7 00:18:45.784811 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jul 7 00:18:45.806333 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 7 00:18:45.806684 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 7 00:18:45.826915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 7 00:18:45.827302 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:18:45.836606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:18:45.842760 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:18:45.847200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:18:45.865609 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:18:45.881015 disk-uuid[607]: Primary Header is updated. Jul 7 00:18:45.881015 disk-uuid[607]: Secondary Entries is updated. Jul 7 00:18:45.881015 disk-uuid[607]: Secondary Header is updated. Jul 7 00:18:45.897669 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:18:45.906406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:18:45.940433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:18:46.956522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:18:46.957430 disk-uuid[608]: The operation has completed successfully. Jul 7 00:18:47.051862 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:18:47.052061 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:18:47.094998 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:18:47.117484 sh[629]: Success Jul 7 00:18:47.142434 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:18:47.143501 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:18:47.143532 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:18:47.157429 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 7 00:18:47.268250 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:18:47.274275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:18:47.292227 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:18:47.314503 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:18:47.317464 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (641) Jul 7 00:18:47.321347 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239 Jul 7 00:18:47.321495 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:18:47.321521 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:18:47.352256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:18:47.353821 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:18:47.357365 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 7 00:18:47.359200 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:18:47.373374 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:18:47.420469 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (674) Jul 7 00:18:47.425098 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:18:47.425202 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:18:47.425229 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:18:47.439419 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:18:47.441791 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:18:47.449633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:18:47.554608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:18:47.563642 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:18:47.652596 systemd-networkd[810]: lo: Link UP Jul 7 00:18:47.652610 systemd-networkd[810]: lo: Gained carrier Jul 7 00:18:47.655331 systemd-networkd[810]: Enumeration completed Jul 7 00:18:47.655537 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:18:47.656329 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:18:47.656337 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:18:47.657932 systemd[1]: Reached target network.target - Network. Jul 7 00:18:47.658457 systemd-networkd[810]: eth0: Link UP Jul 7 00:18:47.658464 systemd-networkd[810]: eth0: Gained carrier Jul 7 00:18:47.658485 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:18:47.683731 systemd-networkd[810]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 7 00:18:47.750194 ignition[738]: Ignition 2.21.0 Jul 7 00:18:47.750480 ignition[738]: Stage: fetch-offline Jul 7 00:18:47.750542 ignition[738]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:47.754370 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:18:47.750556 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:47.757755 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 00:18:47.750852 ignition[738]: parsed url from cmdline: "" Jul 7 00:18:47.750857 ignition[738]: no config URL provided Jul 7 00:18:47.750865 ignition[738]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:18:47.750875 ignition[738]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:18:47.750884 ignition[738]: failed to fetch config: resource requires networking Jul 7 00:18:47.751585 ignition[738]: Ignition finished successfully Jul 7 00:18:47.794087 ignition[821]: Ignition 2.21.0 Jul 7 00:18:47.794109 ignition[821]: Stage: fetch Jul 7 00:18:47.794341 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:47.794359 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:47.794626 ignition[821]: parsed url from cmdline: "" Jul 7 00:18:47.794636 ignition[821]: no config URL provided Jul 7 00:18:47.794649 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:18:47.794665 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:18:47.811026 unknown[821]: fetched base config from "system" Jul 7 00:18:47.794733 ignition[821]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 7 00:18:47.811041 unknown[821]: fetched base config from "system" Jul 7 00:18:47.799064 ignition[821]: GET result: OK Jul 7 00:18:47.811053 unknown[821]: fetched user config from "gcp" Jul 7 00:18:47.799188 ignition[821]: parsing config with SHA512: be7df9c3a1d7dbe9823775e2c367833da9c39155c0447a96256326899eca2b4cecdb468aba0dabaf2909e9dd4ec74e53345a55aa2e94a363e143c0853043d84f Jul 7 00:18:47.814751 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 00:18:47.811577 ignition[821]: fetch: fetch complete Jul 7 00:18:47.821970 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:18:47.811588 ignition[821]: fetch: fetch passed Jul 7 00:18:47.811670 ignition[821]: Ignition finished successfully Jul 7 00:18:47.861845 ignition[828]: Ignition 2.21.0 Jul 7 00:18:47.861866 ignition[828]: Stage: kargs Jul 7 00:18:47.862141 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:47.862162 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:47.867740 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:18:47.865052 ignition[828]: kargs: kargs passed Jul 7 00:18:47.871600 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 00:18:47.865208 ignition[828]: Ignition finished successfully Jul 7 00:18:47.905187 ignition[835]: Ignition 2.21.0 Jul 7 00:18:47.905205 ignition[835]: Stage: disks Jul 7 00:18:47.905505 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:47.910427 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:18:47.905525 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:47.914876 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:18:47.908102 ignition[835]: disks: disks passed Jul 7 00:18:47.920629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:18:47.908193 ignition[835]: Ignition finished successfully Jul 7 00:18:47.925628 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:18:47.929597 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:18:47.934583 systemd[1]: Reached target basic.target - Basic System. 
Jul 7 00:18:47.940159 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:18:47.989402 systemd-fsck[844]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 7 00:18:47.999406 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:18:48.005708 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:18:48.186425 kernel: EXT4-fs (sda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:18:48.187021 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:18:48.190306 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:18:48.195870 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:18:48.211615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:18:48.214307 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 00:18:48.214425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:18:48.214479 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:18:48.233741 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (852) Jul 7 00:18:48.236492 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:18:48.236574 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:18:48.238409 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:18:48.242473 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:18:48.244813 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 00:18:48.257163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:18:48.370264 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:18:48.382803 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:18:48.392360 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:18:48.400612 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:18:48.567909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:18:48.575233 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:18:48.579971 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:18:48.611224 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:18:48.612974 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:18:48.647358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:18:48.651863 ignition[966]: INFO : Ignition 2.21.0 Jul 7 00:18:48.651863 ignition[966]: INFO : Stage: mount Jul 7 00:18:48.657576 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:48.657576 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:48.657576 ignition[966]: INFO : mount: mount passed Jul 7 00:18:48.657576 ignition[966]: INFO : Ignition finished successfully Jul 7 00:18:48.655080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:18:48.665064 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 7 00:18:48.689807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:18:48.721436 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (978) Jul 7 00:18:48.724981 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:18:48.725075 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:18:48.725107 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:18:48.735560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:18:48.774081 ignition[995]: INFO : Ignition 2.21.0 Jul 7 00:18:48.774081 ignition[995]: INFO : Stage: files Jul 7 00:18:48.780551 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:48.780551 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:48.780551 ignition[995]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:18:48.780551 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:18:48.780551 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:18:48.798568 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:18:48.798568 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:18:48.798568 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:18:48.798568 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:18:48.798568 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 00:18:48.788878 unknown[995]: wrote ssh authorized keys file for user: core Jul 7 00:18:48.949172 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:18:49.168689 systemd-networkd[810]: eth0: Gained IPv6LL Jul 7 00:18:49.196537 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:18:49.201585 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:18:49.201585 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 00:18:49.516746 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 00:18:49.658300 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:18:49.663599 ignition[995]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:18:49.663599 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:18:49.695584 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 00:18:50.128950 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 00:18:50.490968 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:18:50.490968 ignition[995]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:18:50.499609 ignition[995]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:18:50.499609 ignition[995]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:18:50.499609 ignition[995]: INFO : files: files passed Jul 7 00:18:50.499609 ignition[995]: INFO : Ignition finished successfully Jul 7 00:18:50.502723 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:18:50.504909 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:18:50.515795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
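The Ignition "files" stage logged above repeatedly follows the same pattern: GET a remote artifact (helm, the cilium CLI, the kubernetes sysext image), retry as "attempt #N" if needed, then write the result under /sysroot. A minimal Python sketch of that fetch-with-retry-then-write pattern is below; the retry count, backoff, and example paths are illustrative assumptions and do not reproduce Ignition's actual implementation.

    # Sketch of the fetch-and-write pattern visible in the Ignition files stage above.
    # URL, destination, and retry policy are illustrative assumptions, not Ignition's code.
    import time
    import urllib.request
    from pathlib import Path

    def fetch_to_sysroot(url: str, dest: str, attempts: int = 3) -> None:
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=30) as resp:
                    data = resp.read()
                break
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(5)  # simple fixed backoff; the real backoff differs
        target = Path("/sysroot") / dest.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)
        print(f"[finished] writing file {target}")

    # Example mirroring one of the writes in the log above:
    # fetch_to_sysroot("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
    #                  "/opt/helm-v3.17.0-linux-amd64.tar.gz")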
Jul 7 00:18:50.544173 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:18:50.544358 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:18:50.566609 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:18:50.566609 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:18:50.563754 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:18:50.585678 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:18:50.571077 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:18:50.577745 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:18:50.652842 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:18:50.653140 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:18:50.659427 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:18:50.663002 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:18:50.672840 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:18:50.674426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:18:50.709995 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:18:50.712402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:18:50.747295 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:18:50.747781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:18:50.752798 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:18:50.757185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:18:50.757546 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:18:50.766971 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:18:50.768827 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:18:50.776012 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:18:50.779000 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:18:50.782852 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:18:50.790775 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:18:50.795965 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:18:50.798974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:18:50.804039 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:18:50.809260 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:18:50.813969 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:18:50.821858 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:18:50.822157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:18:50.834686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 7 00:18:50.841832 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:18:50.845762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:18:50.845949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:18:50.853941 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:18:50.854199 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:18:50.864004 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:18:50.864285 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:18:50.870848 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:18:50.871150 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:18:50.875999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:18:50.892774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:18:50.897697 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:18:50.898535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:18:50.901395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:18:50.903520 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:18:50.915697 ignition[1049]: INFO : Ignition 2.21.0 Jul 7 00:18:50.915697 ignition[1049]: INFO : Stage: umount Jul 7 00:18:50.923640 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:18:50.923640 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:18:50.923640 ignition[1049]: INFO : umount: umount passed Jul 7 00:18:50.923640 ignition[1049]: INFO : Ignition finished successfully Jul 7 00:18:50.927405 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:18:50.932177 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:18:50.946295 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:18:50.946439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:18:50.955783 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:18:50.956594 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:18:50.956713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:18:50.961452 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:18:50.961602 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:18:50.963481 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:18:50.963565 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:18:50.969684 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:18:50.969796 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:18:50.976707 systemd[1]: Stopped target network.target - Network. Jul 7 00:18:50.979819 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:18:50.979919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:18:50.984896 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:18:50.989826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:18:50.993533 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 7 00:18:50.994764 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:18:50.999819 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:18:51.004873 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:18:51.004938 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:18:51.009954 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:18:51.010185 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:18:51.014854 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:18:51.015101 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:18:51.020281 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:18:51.020372 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:18:51.027816 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:18:51.027966 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:18:51.031529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:18:51.042798 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:18:51.046793 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:18:51.047589 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:18:51.056764 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:18:51.057119 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:18:51.057260 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:18:51.065996 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:18:51.067323 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:18:51.072653 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:18:51.072733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:18:51.078887 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:18:51.093533 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:18:51.093678 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:18:51.102669 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:18:51.102764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:18:51.109349 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:18:51.109795 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:18:51.115692 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:18:51.115808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:18:51.123099 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:18:51.131981 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:18:51.132108 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:18:51.140049 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:18:51.140325 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:18:51.144435 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 7 00:18:51.144724 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:18:51.151648 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:18:51.151727 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:18:51.155925 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:18:51.156154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:18:51.165661 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:18:51.165834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:18:51.173575 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:18:51.173730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:18:51.186796 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:18:51.195613 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:18:51.195744 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:18:51.200542 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:18:51.200674 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:18:51.210167 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 00:18:51.210539 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:18:51.220578 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:18:51.220677 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:18:51.227935 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:18:51.228093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:18:51.237931 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:18:51.238007 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 00:18:51.238050 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:18:51.323616 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 7 00:18:51.238093 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:18:51.238669 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:18:51.238812 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:18:51.243371 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:18:51.243570 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:18:51.253904 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:18:51.259344 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:18:51.290994 systemd[1]: Switching root. 
Jul 7 00:18:51.347577 systemd-journald[207]: Journal stopped Jul 7 00:18:53.652372 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:18:53.652469 kernel: SELinux: policy capability open_perms=1 Jul 7 00:18:53.652491 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:18:53.652509 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:18:53.652527 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:18:53.652544 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:18:53.652568 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:18:53.652587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:18:53.652618 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:18:53.652637 kernel: audit: type=1403 audit(1751847532.025:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:18:53.652659 systemd[1]: Successfully loaded SELinux policy in 57.419ms. Jul 7 00:18:53.652683 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.741ms. Jul 7 00:18:53.652705 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:18:53.652730 systemd[1]: Detected virtualization google. Jul 7 00:18:53.652751 systemd[1]: Detected architecture x86-64. Jul 7 00:18:53.652772 systemd[1]: Detected first boot. Jul 7 00:18:53.652793 systemd[1]: Initializing machine ID from random generator. Jul 7 00:18:53.652814 zram_generator::config[1093]: No configuration found. Jul 7 00:18:53.652840 kernel: Guest personality initialized and is inactive Jul 7 00:18:53.652859 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:18:53.652878 kernel: Initialized host personality Jul 7 00:18:53.652895 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:18:53.652915 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:18:53.652956 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:18:53.652978 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:18:53.653004 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:18:53.653026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:18:53.653046 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:18:53.653068 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:18:53.653090 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:18:53.653109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:18:53.653130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:18:53.653157 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:18:53.653179 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:18:53.653201 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:18:53.653222 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 00:18:53.653244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:18:53.653267 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:18:53.653290 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:18:53.653315 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:18:53.653344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:18:53.653370 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:18:53.655464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:18:53.655500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:18:53.655525 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:18:53.655549 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:18:53.655572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:18:53.655596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:18:53.655637 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:18:53.655658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:18:53.655679 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:18:53.655702 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:18:53.655725 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:18:53.655748 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:18:53.655770 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:18:53.655799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:18:53.655823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:18:53.655846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:18:53.655869 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:18:53.655890 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:18:53.655913 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:18:53.655941 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:18:53.655966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:53.655989 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:18:53.656012 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:18:53.656035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:18:53.656060 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:18:53.656083 systemd[1]: Reached target machines.target - Containers. Jul 7 00:18:53.656107 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 7 00:18:53.656135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:18:53.656159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:18:53.656182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:18:53.656204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:18:53.656228 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:18:53.656253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:18:53.656275 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:18:53.656298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:18:53.656324 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:18:53.656351 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:18:53.656407 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:18:53.656430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:18:53.656450 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:18:53.656473 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:18:53.656494 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:18:53.656516 kernel: loop: module loaded Jul 7 00:18:53.656536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:18:53.656562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:18:53.656583 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:18:53.656613 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:18:53.656635 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:18:53.656658 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:18:53.656680 systemd[1]: Stopped verity-setup.service. Jul 7 00:18:53.656704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:53.656727 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:18:53.656753 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:18:53.656776 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:18:53.656799 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:18:53.656821 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:18:53.656843 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:18:53.656866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:18:53.656889 kernel: ACPI: bus type drm_connector registered Jul 7 00:18:53.656910 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:18:53.656932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jul 7 00:18:53.656960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:18:53.656982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:18:53.657005 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:18:53.657027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:18:53.657048 kernel: fuse: init (API version 7.41) Jul 7 00:18:53.657069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:18:53.657091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:18:53.657114 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:18:53.657141 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:18:53.657164 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:18:53.657186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:18:53.657208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:18:53.657231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:18:53.657308 systemd-journald[1165]: Collecting audit messages is disabled. Jul 7 00:18:53.657361 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:18:53.660062 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:18:53.660115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:18:53.660149 systemd-journald[1165]: Journal started Jul 7 00:18:53.660223 systemd-journald[1165]: Runtime Journal (/run/log/journal/fa33e27a30874fcda56de67155e4c260) is 8M, max 148.9M, 140.9M free. Jul 7 00:18:52.999271 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:18:53.024937 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 00:18:53.025714 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:18:53.678201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:18:53.688217 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:18:53.693072 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:18:53.700497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:18:53.711407 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:18:53.719634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:18:53.731449 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:18:53.738990 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:18:53.753420 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:18:53.763423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:18:53.773615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:18:53.784423 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 7 00:18:53.798453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:18:53.811148 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:18:53.817241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:18:53.827457 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:18:53.832886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:18:53.837166 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:18:53.848136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:18:53.864182 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:18:53.904461 kernel: loop0: detected capacity change from 0 to 224512 Jul 7 00:18:53.913652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:18:53.921473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:18:53.927596 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:18:53.933152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:18:53.967479 systemd-journald[1165]: Time spent on flushing to /var/log/journal/fa33e27a30874fcda56de67155e4c260 is 58.841ms for 965 entries. Jul 7 00:18:53.967479 systemd-journald[1165]: System Journal (/var/log/journal/fa33e27a30874fcda56de67155e4c260) is 8M, max 584.8M, 576.8M free. Jul 7 00:18:54.054811 systemd-journald[1165]: Received client request to flush runtime journal. Jul 7 00:18:54.054905 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:18:54.054946 kernel: loop1: detected capacity change from 0 to 113872 Jul 7 00:18:53.978504 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:18:53.986024 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jul 7 00:18:53.986057 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jul 7 00:18:54.026496 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:18:54.035023 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:18:54.041488 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:18:54.058431 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:18:54.126187 kernel: loop2: detected capacity change from 0 to 146240 Jul 7 00:18:54.143998 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:18:54.154648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:18:54.224420 kernel: loop3: detected capacity change from 0 to 52072 Jul 7 00:18:54.253696 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jul 7 00:18:54.253733 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jul 7 00:18:54.268697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:18:54.305444 kernel: loop4: detected capacity change from 0 to 224512 Jul 7 00:18:54.345437 kernel: loop5: detected capacity change from 0 to 113872 Jul 7 00:18:54.410730 kernel: loop6: detected capacity change from 0 to 146240 Jul 7 00:18:54.468408 kernel: loop7: detected capacity change from 0 to 52072 Jul 7 00:18:54.503187 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jul 7 00:18:54.504304 (sd-merge)[1241]: Merged extensions into '/usr'. Jul 7 00:18:54.517819 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:18:54.518047 systemd[1]: Reloading... Jul 7 00:18:54.711835 zram_generator::config[1267]: No configuration found. Jul 7 00:18:54.974617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:18:55.035437 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:18:55.191210 systemd[1]: Reloading finished in 672 ms. Jul 7 00:18:55.213233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:18:55.217880 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:18:55.238853 systemd[1]: Starting ensure-sysext.service... Jul 7 00:18:55.250585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:18:55.287491 systemd[1]: Reload requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:18:55.287516 systemd[1]: Reloading... Jul 7 00:18:55.317764 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:18:55.318259 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:18:55.318859 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:18:55.319440 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:18:55.324107 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:18:55.326181 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jul 7 00:18:55.326460 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jul 7 00:18:55.336087 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:18:55.337694 systemd-tmpfiles[1308]: Skipping /boot Jul 7 00:18:55.385749 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:18:55.387100 systemd-tmpfiles[1308]: Skipping /boot Jul 7 00:18:55.494493 zram_generator::config[1344]: No configuration found. Jul 7 00:18:55.627266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:18:55.749309 systemd[1]: Reloading finished in 461 ms. Jul 7 00:18:55.772398 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:18:55.799277 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
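The sd-merge step above picks up the sysext images staged by Ignition ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce') and merges them into /usr. A small sketch for inspecting such images on a host like this one is below; the directory list is an assumption based on the paths seen earlier in this log (/etc/extensions and /opt/extensions), and the actual merge is performed by systemd-sysext, not by this script.

    # Sketch: enumerate sysext .raw images like the ones merged above.
    # Directories are an assumption taken from paths in this log; systemd-sysext
    # searches its own documented hierarchy, which this sketch does not reproduce exactly.
    from pathlib import Path

    def list_sysext_images(dirs=("/etc/extensions", "/opt/extensions")):
        for d in dirs:
            p = Path(d)
            if not p.is_dir():
                continue
            for raw in sorted(p.rglob("*.raw")):
                # resolve() follows symlinks such as kubernetes.raw -> /opt/extensions/...
                print(f"{raw} -> {raw.resolve()}")

    if __name__ == "__main__":
        list_sysext_images()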
Jul 7 00:18:55.821721 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:18:55.835393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:18:55.855469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:18:55.877560 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:18:55.891524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:18:55.909529 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:18:55.928720 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:55.929341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:18:55.933621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:18:55.955557 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:18:55.971147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:18:55.980782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:18:55.981039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:18:55.987321 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:18:55.993044 augenrules[1405]: No rules Jul 7 00:18:55.996593 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:55.998849 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:18:56.003213 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:18:56.015252 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:18:56.027833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:18:56.028797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:18:56.035845 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Jul 7 00:18:56.040723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:18:56.041126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:18:56.056154 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:18:56.056658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:18:56.066670 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:18:56.100315 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:18:56.111653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:56.112082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:18:56.119297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 7 00:18:56.131940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:18:56.150313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:18:56.159688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:18:56.159944 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:18:56.164872 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:18:56.173614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:18:56.173845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:56.180806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:18:56.193137 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:18:56.207905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:18:56.208269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:18:56.220102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:18:56.221479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:18:56.234364 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:18:56.235508 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:18:56.246885 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:18:56.311341 systemd[1]: Finished ensure-sysext.service. Jul 7 00:18:56.312228 systemd-resolved[1386]: Positive Trust Anchors: Jul 7 00:18:56.312253 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:18:56.312319 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:18:56.331135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:56.338527 systemd-resolved[1386]: Defaulting to hostname 'linux'. Jul 7 00:18:56.342688 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:18:56.351145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:18:56.355795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:18:56.367798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 7 00:18:56.381208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:18:56.396316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:18:56.411766 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 00:18:56.419717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:18:56.419807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:18:56.426206 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:18:56.435618 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:18:56.444601 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:18:56.444662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:18:56.445188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:18:56.455221 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:18:56.456587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:18:56.475705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:18:56.488255 augenrules[1454]: /sbin/augenrules: No change Jul 7 00:18:56.526656 augenrules[1488]: No rules Jul 7 00:18:56.528774 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:18:56.530478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:18:56.543485 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:18:56.544988 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:18:56.556248 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:18:56.557906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:18:56.572823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:18:56.574563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:18:56.586206 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 00:18:56.618032 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jul 7 00:18:56.619832 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 7 00:18:56.634465 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jul 7 00:18:56.643583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:18:56.643686 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:18:56.652756 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:18:56.662881 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:18:56.673575 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jul 7 00:18:56.683086 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:18:56.692831 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:18:56.703846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:18:56.715639 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:18:56.715706 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:18:56.723968 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:18:56.736130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:18:56.750103 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:18:56.766671 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:18:56.776425 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:18:56.785820 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:18:56.789705 systemd-networkd[1473]: lo: Link UP Jul 7 00:18:56.789725 systemd-networkd[1473]: lo: Gained carrier Jul 7 00:18:56.796645 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:18:56.800519 systemd-networkd[1473]: Enumeration completed Jul 7 00:18:56.806101 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:18:56.816918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:18:56.817946 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:18:56.828415 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 00:18:56.845438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:18:56.852437 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jul 7 00:18:56.859419 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:18:56.867862 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:18:56.887586 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:18:56.895445 systemd[1]: Reached target network.target - Network. Jul 7 00:18:56.906424 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 7 00:18:56.925866 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:18:56.932412 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:18:56.938630 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:18:56.943491 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 00:18:56.952626 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:18:56.960855 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:18:56.960915 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:18:56.964747 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:18:56.977764 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:18:56.992711 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:18:57.010537 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 7 00:18:57.023313 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:18:57.025664 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:18:57.025761 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:18:57.028705 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:18:57.036078 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:18:57.070641 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 00:18:57.082610 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:18:57.095346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:18:57.109018 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:18:57.135195 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:18:57.146999 jq[1538]: false Jul 7 00:18:57.154775 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:18:57.180778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:18:57.194206 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 7 00:18:57.197715 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:18:57.198738 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:18:57.209091 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:18:57.219920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:18:57.228494 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:18:57.229051 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:18:57.230475 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:18:57.234966 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:18:57.238183 oslogin_cache_refresh[1540]: Refreshing passwd entry cache Jul 7 00:18:57.239917 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache Jul 7 00:18:57.235843 systemd-networkd[1473]: eth0: Link UP Jul 7 00:18:57.236102 systemd-networkd[1473]: eth0: Gained carrier Jul 7 00:18:57.236142 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:18:57.243766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:18:57.245500 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 7 00:18:57.263073 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:18:57.288107 systemd-networkd[1473]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 7 00:18:57.372077 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting Jul 7 00:18:57.372077 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:18:57.372077 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache Jul 7 00:18:57.362724 oslogin_cache_refresh[1540]: Failure getting users, quitting Jul 7 00:18:57.362754 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:18:57.362852 oslogin_cache_refresh[1540]: Refreshing group entry cache Jul 7 00:18:57.400929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:18:57.404308 oslogin_cache_refresh[1540]: Failure getting groups, quitting Jul 7 00:18:57.407410 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting Jul 7 00:18:57.407410 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:18:57.404331 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:18:57.427987 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:18:57.428356 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:18:57.452720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 7 00:18:57.479413 extend-filesystems[1539]: Found /dev/sda6 Jul 7 00:18:57.482563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:18:57.486855 jq[1556]: true Jul 7 00:18:57.532277 extend-filesystems[1539]: Found /dev/sda9 Jul 7 00:18:57.540299 update_engine[1554]: I20250707 00:18:57.538177 1554 main.cc:92] Flatcar Update Engine starting Jul 7 00:18:57.540786 extend-filesystems[1539]: Checking size of /dev/sda9 Jul 7 00:18:57.560952 jq[1581]: true Jul 7 00:18:57.560473 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:18:57.563140 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:18:57.588629 coreos-metadata[1534]: Jul 07 00:18:57.588 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 7 00:18:57.593155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
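The coreos-metadata agent begins querying the GCE metadata server here (the hostname request just above; the external IP, internal IP, and machine-type requests follow below). A hedged Python sketch of an equivalent query is shown next; the endpoint paths are taken directly from the log lines, and the "Metadata-Flavor: Google" header is the one the GCE metadata server requires. Error handling is deliberately minimal.

    # Sketch: query the same GCE metadata endpoints that coreos-metadata fetches in this log.
    import urllib.request

    METADATA = "http://169.254.169.254/computeMetadata/v1"
    PATHS = [
        "instance/hostname",
        "instance/network-interfaces/0/access-configs/0/external-ip",
        "instance/network-interfaces/0/ip",
        "instance/machine-type",
    ]

    def fetch(path: str) -> str:
        req = urllib.request.Request(f"{METADATA}/{path}",
                                     headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        for p in PATHS:
            print(p, "=", fetch(p))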
Jul 7 00:18:57.599706 coreos-metadata[1534]: Jul 07 00:18:57.599 INFO Fetch successful Jul 7 00:18:57.599706 coreos-metadata[1534]: Jul 07 00:18:57.599 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 7 00:18:57.604431 coreos-metadata[1534]: Jul 07 00:18:57.604 INFO Fetch successful Jul 7 00:18:57.604431 coreos-metadata[1534]: Jul 07 00:18:57.604 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 7 00:18:57.605773 coreos-metadata[1534]: Jul 07 00:18:57.605 INFO Fetch successful Jul 7 00:18:57.605773 coreos-metadata[1534]: Jul 07 00:18:57.605 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 7 00:18:57.622430 coreos-metadata[1534]: Jul 07 00:18:57.615 INFO Fetch successful Jul 7 00:18:57.634581 extend-filesystems[1539]: Resized partition /dev/sda9 Jul 7 00:18:57.657439 extend-filesystems[1596]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:18:57.684952 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:18:57.712538 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 7 00:18:57.750206 tar[1558]: linux-amd64/LICENSE Jul 7 00:18:57.750206 tar[1558]: linux-amd64/helm Jul 7 00:18:57.821536 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 7 00:18:57.886045 extend-filesystems[1596]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 7 00:18:57.886045 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 7 00:18:57.886045 extend-filesystems[1596]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 7 00:18:57.888465 extend-filesystems[1539]: Resized filesystem in /dev/sda9 Jul 7 00:18:57.887948 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:18:57.889508 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:18:57.925989 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:18:57.992974 dbus-daemon[1535]: [system] SELinux support is enabled Jul 7 00:18:58.009682 dbus-daemon[1535]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1473 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 00:18:58.018687 update_engine[1554]: I20250707 00:18:58.018607 1554 update_check_scheduler.cc:74] Next update check in 11m15s Jul 7 00:18:58.048800 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:18:58.063402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:18:58.074291 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:18:58.085166 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:18:58.141781 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:18:58.148633 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:18:58.152263 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:18:58.157938 systemd[1]: Starting sshkeys.service... 
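[editor's note] extend-filesystems grew the mounted root filesystem on /dev/sda9 online, from 1617920 to 2538491 4 KiB blocks (roughly 6.2 GiB to 9.7 GiB). A hedged sketch of the equivalent manual steps; growpart comes from cloud-utils and is only an assumption about how the partition itself would be enlarged, it is not what this log shows Flatcar using:

    # Grow partition 9 to fill the disk (growpart is an assumption; Flatcar's own tooling may differ).
    growpart /dev/sda 9
    # ext4 supports online growth, so the mounted root filesystem is resized in place,
    # which is what the resize2fs 1.47.2 messages above reflect.
    resize2fs /dev/sda9
    # Confirm the new size in filesystem blocks.
    dumpe2fs -h /dev/sda9 | grep 'Block count'
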
Jul 7 00:18:58.163602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:18:58.163671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:18:58.181756 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 00:18:58.190589 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:18:58.190646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:18:58.197252 ntpd[1542]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: ---------------------------------------------------- Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: corporation. Support and training for ntp-4 are Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: available at https://www.nwtime.org/support Jul 7 00:18:58.209855 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: ---------------------------------------------------- Jul 7 00:18:58.202243 ntpd[1542]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:18:58.202259 ntpd[1542]: ---------------------------------------------------- Jul 7 00:18:58.202273 ntpd[1542]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:18:58.202286 ntpd[1542]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:18:58.202299 ntpd[1542]: corporation. Support and training for ntp-4 are Jul 7 00:18:58.202312 ntpd[1542]: available at https://www.nwtime.org/support Jul 7 00:18:58.202325 ntpd[1542]: ---------------------------------------------------- Jul 7 00:18:58.219465 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
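[editor's note] update_engine has scheduled its next check (11m15s out, per the line above) and locksmithd, started just above as the cluster reboot manager, will coordinate any reboot an update requires. A sketch for querying both on a Flatcar host; the client names are the stock Flatcar tools as documented upstream and are not exercised anywhere in this log:

    # Update engine state (idle until the next scheduled check).
    update_engine_client -status
    # Reboot-coordination state and configured strategy.
    locksmithctl status
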
Jul 7 00:18:58.228002 ntpd[1542]: proto: precision = 0.092 usec (-23) Jul 7 00:18:58.228469 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: proto: precision = 0.092 usec (-23) Jul 7 00:18:58.234759 ntpd[1542]: basedate set to 2025-06-24 Jul 7 00:18:58.241693 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: basedate set to 2025-06-24 Jul 7 00:18:58.241693 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: gps base set to 2025-06-29 (week 2373) Jul 7 00:18:58.234799 ntpd[1542]: gps base set to 2025-06-29 (week 2373) Jul 7 00:18:58.262599 ntpd[1542]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:18:58.263075 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:18:58.263075 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:18:58.263075 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:18:58.263075 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listen normally on 3 eth0 10.128.0.28:123 Jul 7 00:18:58.262703 ntpd[1542]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:18:58.263314 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listen normally on 4 lo [::1]:123 Jul 7 00:18:58.263314 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: bind(21) AF_INET6 fe80::4001:aff:fe80:1c%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:18:58.263314 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1c%2#123 Jul 7 00:18:58.263314 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: failed to init interface for address fe80::4001:aff:fe80:1c%2 Jul 7 00:18:58.263314 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: Listening on routing socket on fd #21 for interface updates Jul 7 00:18:58.262982 ntpd[1542]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:18:58.263038 ntpd[1542]: Listen normally on 3 eth0 10.128.0.28:123 Jul 7 00:18:58.263098 ntpd[1542]: Listen normally on 4 lo [::1]:123 Jul 7 00:18:58.263171 ntpd[1542]: bind(21) AF_INET6 fe80::4001:aff:fe80:1c%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:18:58.263207 ntpd[1542]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1c%2#123 Jul 7 00:18:58.263239 ntpd[1542]: failed to init interface for address fe80::4001:aff:fe80:1c%2 Jul 7 00:18:58.263286 ntpd[1542]: Listening on routing socket on fd #21 for interface updates Jul 7 00:18:58.267322 ntpd[1542]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:18:58.270710 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:18:58.270710 ntpd[1542]: 7 Jul 00:18:58 ntpd[1542]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:18:58.267392 ntpd[1542]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:18:58.275958 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:18:58.290530 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
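[editor's note] ntpd cannot bind the eth0 link-local address yet because IPv6 on that interface is still coming up; the bind succeeds later in this log, once eth0 gains IPv6LL ("Listen normally on 6 eth0 [fe80::...]" at 00:19:01). A quick sketch for checking peer selection after the daemon is listening; ntpq ships with the same ntp package as ntpd:

    # Peer table: a '*' in the first column marks the selected time source once the clock syncs.
    ntpq -p
    # System variables, including stratum and offset.
    ntpq -c rv
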
Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.502 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetch failed with 404: resource not found Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetch successful Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetch failed with 404: resource not found Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetch failed with 404: resource not found Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 7 00:18:58.503942 coreos-metadata[1635]: Jul 07 00:18:58.503 INFO Fetch successful Jul 7 00:18:58.505469 unknown[1635]: wrote ssh authorized keys file for user: core Jul 7 00:18:58.532523 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:18:58.544971 containerd[1559]: time="2025-07-07T00:18:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:18:58.547622 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:18:58.561022 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:18:58.558157 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:18:58.562005 containerd[1559]: time="2025-07-07T00:18:58.561884942Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:18:58.574341 systemd[1]: Finished sshkeys.service. Jul 7 00:18:58.588473 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:18:58.588543 systemd-logind[1548]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 00:18:58.588580 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:18:58.589002 systemd-logind[1548]: New seat seat0. Jul 7 00:18:58.590496 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:18:58.628373 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:18:58.641917 systemd[1]: Starting issuegen.service - Generate /run/issue... 
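[editor's note] coreos-metadata probes instance-level SSH-key attributes first and falls back to project-level ones, treating a 404 as "attribute not set" rather than an error, which is why only the two ssh-keys fetches succeed above. The same endpoints can be queried by hand; the only non-obvious requirement is the mandatory Metadata-Flavor header:

    # Instance-level SSH keys (present on this VM, per the successful fetch above).
    curl -s -H "Metadata-Flavor: Google" \
      "http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys"
    # Project-level fallback used when the instance attribute is absent.
    curl -s -H "Metadata-Flavor: Google" \
      "http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys"
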
Jul 7 00:18:58.649791 containerd[1559]: time="2025-07-07T00:18:58.647229976Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.432µs" Jul 7 00:18:58.649791 containerd[1559]: time="2025-07-07T00:18:58.647288851Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:18:58.649791 containerd[1559]: time="2025-07-07T00:18:58.647324355Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.650942317Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651012315Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651065737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651265834Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651298049Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651795837Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651824158Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651842192Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651856918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.651977304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652430 containerd[1559]: time="2025-07-07T00:18:58.652286205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652947 containerd[1559]: time="2025-07-07T00:18:58.652338269Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:18:58.652947 containerd[1559]: time="2025-07-07T00:18:58.652356780Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:18:58.656664 containerd[1559]: time="2025-07-07T00:18:58.655672195Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:18:58.656664 containerd[1559]: 
time="2025-07-07T00:18:58.656070885Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:18:58.656664 containerd[1559]: time="2025-07-07T00:18:58.656278460Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:18:58.664008 containerd[1559]: time="2025-07-07T00:18:58.663943808Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:18:58.664149 containerd[1559]: time="2025-07-07T00:18:58.664050551Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:18:58.664149 containerd[1559]: time="2025-07-07T00:18:58.664077455Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:18:58.664149 containerd[1559]: time="2025-07-07T00:18:58.664096898Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:18:58.664149 containerd[1559]: time="2025-07-07T00:18:58.664115121Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:18:58.664149 containerd[1559]: time="2025-07-07T00:18:58.664131044Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664151607Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664181587Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664207968Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664226035Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664244767Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:18:58.664370 containerd[1559]: time="2025-07-07T00:18:58.664274039Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664507594Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664546464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664574584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664594007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664615441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:18:58.664652 containerd[1559]: time="2025-07-07T00:18:58.664633103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:18:58.664876 containerd[1559]: time="2025-07-07T00:18:58.664664052Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:18:58.664876 containerd[1559]: time="2025-07-07T00:18:58.664683615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:18:58.664876 containerd[1559]: time="2025-07-07T00:18:58.664703833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:18:58.664876 containerd[1559]: time="2025-07-07T00:18:58.664722655Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:18:58.664876 containerd[1559]: time="2025-07-07T00:18:58.664751644Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:18:58.665072 containerd[1559]: time="2025-07-07T00:18:58.664903654Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:18:58.665072 containerd[1559]: time="2025-07-07T00:18:58.664935521Z" level=info msg="Start snapshots syncer" Jul 7 00:18:58.665072 containerd[1559]: time="2025-07-07T00:18:58.664967289Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:18:58.668394 containerd[1559]: time="2025-07-07T00:18:58.665342855Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:18:58.668394 containerd[1559]: time="2025-07-07T00:18:58.667568396Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.667735433Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:18:58.668738 containerd[1559]: 
time="2025-07-07T00:18:58.667967197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668006241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668026295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668047513Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668069362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668087792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668109685Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668160640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668180283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668204445Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668249102Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668275719Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:18:58.668738 containerd[1559]: time="2025-07-07T00:18:58.668292145Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:18:58.669347 containerd[1559]: time="2025-07-07T00:18:58.668308605Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:18:58.669347 containerd[1559]: time="2025-07-07T00:18:58.668324715Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.668343329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.670653404Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.670688947Z" level=info msg="runtime interface created" Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.670700350Z" level=info msg="created NRI interface" Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.670715863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:18:58.672626 containerd[1559]: 
time="2025-07-07T00:18:58.670743848Z" level=info msg="Connect containerd service" Jul 7 00:18:58.672626 containerd[1559]: time="2025-07-07T00:18:58.670801090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:18:58.678559 containerd[1559]: time="2025-07-07T00:18:58.677115027Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:18:58.693080 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:18:58.695093 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:18:58.711697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:18:58.757470 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 00:18:58.759868 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 00:18:58.761254 dbus-daemon[1535]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1632 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 00:18:58.768905 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:18:58.816433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:18:58.831609 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:18:58.843770 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 00:18:58.857533 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:18:58.868733 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:18:58.880083 systemd[1]: Started sshd@0-10.128.0.28:22-139.178.68.195:35898.service - OpenSSH per-connection server daemon (139.178.68.195:35898). Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.058962505Z" level=info msg="Start subscribing containerd event" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059050841Z" level=info msg="Start recovering state" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059209912Z" level=info msg="Start event monitor" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059231817Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059246554Z" level=info msg="Start streaming server" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059260338Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059272429Z" level=info msg="runtime interface starting up..." Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059285154Z" level=info msg="starting plugins..." Jul 7 00:18:59.059462 containerd[1559]: time="2025-07-07T00:18:59.059305292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:18:59.063428 containerd[1559]: time="2025-07-07T00:18:59.062828302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:18:59.063428 containerd[1559]: time="2025-07-07T00:18:59.062933389Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 7 00:18:59.064282 containerd[1559]: time="2025-07-07T00:18:59.063794618Z" level=info msg="containerd successfully booted in 0.519887s" Jul 7 00:18:59.063961 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:18:59.146892 polkitd[1670]: Started polkitd version 126 Jul 7 00:18:59.153501 systemd-networkd[1473]: eth0: Gained IPv6LL Jul 7 00:18:59.155635 polkitd[1670]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 00:18:59.156369 polkitd[1670]: Loading rules from directory /run/polkit-1/rules.d Jul 7 00:18:59.156480 polkitd[1670]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:18:59.157035 polkitd[1670]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 00:18:59.157085 polkitd[1670]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:18:59.157139 polkitd[1670]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 00:18:59.157977 polkitd[1670]: Finished loading, compiling and executing 2 rules Jul 7 00:18:59.159550 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 00:18:59.161285 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 00:18:59.162368 polkitd[1670]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 00:18:59.170727 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:18:59.184503 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:18:59.201750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:18:59.213534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:18:59.225059 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jul 7 00:18:59.251792 systemd-hostnamed[1632]: Hostname set to (transient) Jul 7 00:18:59.256670 systemd-resolved[1386]: System hostname changed to 'ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal'. Jul 7 00:18:59.275391 init.sh[1691]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 7 00:18:59.275391 init.sh[1691]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 7 00:18:59.282908 init.sh[1691]: + /usr/bin/google_instance_setup Jul 7 00:18:59.329553 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:18:59.363077 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 35898 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:59.372345 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:59.397485 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:18:59.409633 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:18:59.459957 systemd-logind[1548]: New session 1 of user core. Jul 7 00:18:59.476303 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:18:59.484639 tar[1558]: linux-amd64/README.md Jul 7 00:18:59.505684 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:18:59.534546 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
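[editor's note] containerd is now serving on both its gRPC and ttrpc sockets and has registered the k8s.io namespace with NRI; the earlier "failed to load cni during init" error just reflects that no CNI config exists yet under /etc/cni/net.d. A sketch for verifying the daemon from the host; ctr and "containerd config dump" are bundled with containerd, while crictl is assumed to be installed, which this log does not show:

    # Talk to containerd over the socket it just announced.
    ctr --address /run/containerd/containerd.sock version
    # Dump the merged configuration (shows the CRI settings logged above, e.g. SystemdCgroup=true).
    containerd config dump | head -n 40
    # CRI-level status, including the uninitialized CNI condition (crictl itself is an assumption here).
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
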
Jul 7 00:18:59.553983 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:18:59.559225 systemd-logind[1548]: New session c1 of user core. Jul 7 00:18:59.919887 systemd[1707]: Queued start job for default target default.target. Jul 7 00:18:59.925273 systemd[1707]: Created slice app.slice - User Application Slice. Jul 7 00:18:59.925337 systemd[1707]: Reached target paths.target - Paths. Jul 7 00:18:59.926060 systemd[1707]: Reached target timers.target - Timers. Jul 7 00:18:59.929808 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:18:59.956064 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:18:59.959459 systemd[1707]: Reached target sockets.target - Sockets. Jul 7 00:18:59.959672 systemd[1707]: Reached target basic.target - Basic System. Jul 7 00:18:59.959789 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:18:59.960005 systemd[1707]: Reached target default.target - Main User Target. Jul 7 00:18:59.960065 systemd[1707]: Startup finished in 379ms. Jul 7 00:18:59.976716 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:19:00.084185 instance-setup[1695]: INFO Running google_set_multiqueue. Jul 7 00:19:00.104670 instance-setup[1695]: INFO Set channels for eth0 to 2. Jul 7 00:19:00.110633 instance-setup[1695]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jul 7 00:19:00.113364 instance-setup[1695]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jul 7 00:19:00.115038 instance-setup[1695]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jul 7 00:19:00.115813 instance-setup[1695]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jul 7 00:19:00.116524 instance-setup[1695]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Jul 7 00:19:00.118988 instance-setup[1695]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jul 7 00:19:00.120312 instance-setup[1695]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jul 7 00:19:00.122693 instance-setup[1695]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jul 7 00:19:00.139660 instance-setup[1695]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 7 00:19:00.153746 instance-setup[1695]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 7 00:19:00.162432 instance-setup[1695]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 7 00:19:00.162573 instance-setup[1695]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 7 00:19:00.220551 init.sh[1691]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 7 00:19:00.231439 systemd[1]: Started sshd@1-10.128.0.28:22-139.178.68.195:35908.service - OpenSSH per-connection server daemon (139.178.68.195:35908). Jul 7 00:19:00.454093 startup-script[1749]: INFO Starting startup scripts. Jul 7 00:19:00.462102 startup-script[1749]: INFO No startup scripts found in metadata. Jul 7 00:19:00.462322 startup-script[1749]: INFO Finished running startup scripts. Jul 7 00:19:00.499298 init.sh[1691]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 7 00:19:00.501423 init.sh[1691]: + daemon_pids=() Jul 7 00:19:00.501423 init.sh[1691]: + for d in accounts clock_skew network Jul 7 00:19:00.501423 init.sh[1691]: + daemon_pids+=($!) 
Jul 7 00:19:00.501423 init.sh[1691]: + for d in accounts clock_skew network Jul 7 00:19:00.501423 init.sh[1691]: + daemon_pids+=($!) Jul 7 00:19:00.501423 init.sh[1691]: + for d in accounts clock_skew network Jul 7 00:19:00.501423 init.sh[1691]: + daemon_pids+=($!) Jul 7 00:19:00.501423 init.sh[1691]: + NOTIFY_SOCKET=/run/systemd/notify Jul 7 00:19:00.501423 init.sh[1691]: + /usr/bin/systemd-notify --ready Jul 7 00:19:00.502085 init.sh[1754]: + /usr/bin/google_accounts_daemon Jul 7 00:19:00.502877 init.sh[1755]: + /usr/bin/google_clock_skew_daemon Jul 7 00:19:00.503697 init.sh[1756]: + /usr/bin/google_network_daemon Jul 7 00:19:00.518982 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jul 7 00:19:00.534470 init.sh[1691]: + wait -n 1754 1755 1756 Jul 7 00:19:00.572517 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 35908 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:00.576092 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:00.595536 systemd-logind[1548]: New session 2 of user core. Jul 7 00:19:00.599085 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:19:00.808673 sshd[1758]: Connection closed by 139.178.68.195 port 35908 Jul 7 00:19:00.813860 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:00.828224 systemd[1]: sshd@1-10.128.0.28:22-139.178.68.195:35908.service: Deactivated successfully. Jul 7 00:19:00.833176 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:19:00.835622 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:19:00.840190 systemd-logind[1548]: Removed session 2. Jul 7 00:19:00.871304 systemd[1]: Started sshd@2-10.128.0.28:22-139.178.68.195:35918.service - OpenSSH per-connection server daemon (139.178.68.195:35918). Jul 7 00:19:01.016045 google-clock-skew[1755]: INFO Starting Google Clock Skew daemon. Jul 7 00:19:01.035278 google-clock-skew[1755]: INFO Clock drift token has changed: 0. Jul 7 00:19:01.059441 google-networking[1756]: INFO Starting Google Networking daemon. Jul 7 00:19:01.098365 groupadd[1773]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 7 00:19:01.106190 groupadd[1773]: group added to /etc/gshadow: name=google-sudoers Jul 7 00:19:01.173420 groupadd[1773]: new group: name=google-sudoers, GID=1000 Jul 7 00:19:01.208352 google-accounts[1754]: INFO Starting Google Accounts daemon. Jul 7 00:19:01.214101 ntpd[1542]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1c%2]:123 Jul 7 00:19:01.214619 ntpd[1542]: 7 Jul 00:19:01 ntpd[1542]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1c%2]:123 Jul 7 00:19:01.225434 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 35918 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:01.228031 google-accounts[1754]: WARNING OS Login not installed. Jul 7 00:19:01.228766 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:01.231799 google-accounts[1754]: INFO Creating a new user account for 0. Jul 7 00:19:01.242553 systemd-logind[1548]: New session 3 of user core. Jul 7 00:19:01.243809 init.sh[1782]: useradd: invalid user name '0': use --badname to ignore Jul 7 00:19:01.245175 google-accounts[1754]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 7 00:19:01.246705 systemd[1]: Started session-3.scope - Session 3 of User core. 
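[editor's note] google_accounts_daemon manages local users from instance metadata (OS Login is reported as not installed just above), and here it tries to create an account literally named "0", presumably from a metadata entry whose user field is "0"; shadow's useradd rejects the name with exit status 3, exactly as logged. A small sketch reproducing the failure, shown only to make the error concrete:

    # The exact command the daemon ran (copied from the log); "0" is not a valid login name.
    useradd -m -s /bin/bash -p '*' 0
    echo $?    # 3, matching the non-zero exit status reported above
    # Newer shadow releases accept --badname to skip the name check, as the message suggests,
    # but that is rarely what you want for an auto-provisioned account.
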
Jul 7 00:19:01.446449 sshd[1784]: Connection closed by 139.178.68.195 port 35918 Jul 7 00:19:01.447253 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:01.454947 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:19:01.455982 systemd[1]: sshd@2-10.128.0.28:22-139.178.68.195:35918.service: Deactivated successfully. Jul 7 00:19:01.459839 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:19:01.463642 systemd-logind[1548]: Removed session 3. Jul 7 00:19:02.000713 systemd-resolved[1386]: Clock change detected. Flushing caches. Jul 7 00:19:02.001613 google-clock-skew[1755]: INFO Synced system time with hardware clock. Jul 7 00:19:02.057835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:02.071335 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:19:02.076784 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:19:02.081546 systemd[1]: Startup finished in 4.158s (kernel) + 8.194s (initrd) + 9.739s (userspace) = 22.092s. Jul 7 00:19:03.101356 kubelet[1794]: E0707 00:19:03.101273 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:19:03.104278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:19:03.104558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:19:03.105276 systemd[1]: kubelet.service: Consumed 1.375s CPU time, 263.3M memory peak. Jul 7 00:19:11.880253 systemd[1]: Started sshd@3-10.128.0.28:22-139.178.68.195:53496.service - OpenSSH per-connection server daemon (139.178.68.195:53496). Jul 7 00:19:12.189065 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 53496 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:12.191131 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:12.199620 systemd-logind[1548]: New session 4 of user core. Jul 7 00:19:12.206202 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:19:12.403130 sshd[1808]: Connection closed by 139.178.68.195 port 53496 Jul 7 00:19:12.404221 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:12.411065 systemd[1]: sshd@3-10.128.0.28:22-139.178.68.195:53496.service: Deactivated successfully. Jul 7 00:19:12.414003 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:19:12.415354 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:19:12.417855 systemd-logind[1548]: Removed session 4. Jul 7 00:19:12.459419 systemd[1]: Started sshd@4-10.128.0.28:22-139.178.68.195:53508.service - OpenSSH per-connection server daemon (139.178.68.195:53508). Jul 7 00:19:12.784280 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 53508 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:12.786152 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:12.794207 systemd-logind[1548]: New session 5 of user core. Jul 7 00:19:12.797086 systemd[1]: Started session-5.scope - Session 5 of User core. 
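[editor's note] The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is written during "kubeadm init" or "kubeadm join", so a crash loop like this is expected until the node is actually bootstrapped. A hedged sketch for confirming that, assuming a kubeadm-based flow which this log does not itself show:

    # Watch the restart loop and the exact error.
    journalctl -u kubelet --no-pager -n 20
    ls -l /var/lib/kubelet/config.yaml    # absent until bootstrap
    # Typical kubeadm bootstrap that creates the file (placeholders, not values from this log):
    # kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
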
Jul 7 00:19:12.993198 sshd[1816]: Connection closed by 139.178.68.195 port 53508 Jul 7 00:19:12.994162 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:13.000865 systemd[1]: sshd@4-10.128.0.28:22-139.178.68.195:53508.service: Deactivated successfully. Jul 7 00:19:13.003616 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:19:13.004751 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:19:13.007052 systemd-logind[1548]: Removed session 5. Jul 7 00:19:13.055607 systemd[1]: Started sshd@5-10.128.0.28:22-139.178.68.195:53510.service - OpenSSH per-connection server daemon (139.178.68.195:53510). Jul 7 00:19:13.313160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:19:13.318109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:13.377437 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 53510 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:13.379208 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:13.386947 systemd-logind[1548]: New session 6 of user core. Jul 7 00:19:13.398170 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:19:13.596555 sshd[1827]: Connection closed by 139.178.68.195 port 53510 Jul 7 00:19:13.597579 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:13.604358 systemd[1]: sshd@5-10.128.0.28:22-139.178.68.195:53510.service: Deactivated successfully. Jul 7 00:19:13.607123 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:19:13.610313 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:19:13.614389 systemd-logind[1548]: Removed session 6. Jul 7 00:19:13.652754 systemd[1]: Started sshd@6-10.128.0.28:22-139.178.68.195:53526.service - OpenSSH per-connection server daemon (139.178.68.195:53526). Jul 7 00:19:13.688482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:13.709546 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:19:13.772212 kubelet[1839]: E0707 00:19:13.772140 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:19:13.776893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:19:13.777144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:19:13.777725 systemd[1]: kubelet.service: Consumed 229ms CPU time, 108.3M memory peak. Jul 7 00:19:13.971087 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 53526 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:13.973069 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:13.981153 systemd-logind[1548]: New session 7 of user core. Jul 7 00:19:13.987129 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 7 00:19:14.168428 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:19:14.169041 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:19:14.188007 sudo[1849]: pam_unix(sudo:session): session closed for user root Jul 7 00:19:14.231041 sshd[1848]: Connection closed by 139.178.68.195 port 53526 Jul 7 00:19:14.232600 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:14.239328 systemd[1]: sshd@6-10.128.0.28:22-139.178.68.195:53526.service: Deactivated successfully. Jul 7 00:19:14.241851 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:19:14.243155 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:19:14.245535 systemd-logind[1548]: Removed session 7. Jul 7 00:19:14.295743 systemd[1]: Started sshd@7-10.128.0.28:22-139.178.68.195:53542.service - OpenSSH per-connection server daemon (139.178.68.195:53542). Jul 7 00:19:14.607781 sshd[1855]: Accepted publickey for core from 139.178.68.195 port 53542 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:14.609406 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:14.616885 systemd-logind[1548]: New session 8 of user core. Jul 7 00:19:14.624230 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:19:14.785898 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:19:14.786405 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:19:14.794037 sudo[1859]: pam_unix(sudo:session): session closed for user root Jul 7 00:19:14.808727 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:19:14.809239 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:19:14.822267 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:19:14.875301 augenrules[1881]: No rules Jul 7 00:19:14.877626 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:19:14.878025 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:19:14.880145 sudo[1858]: pam_unix(sudo:session): session closed for user root Jul 7 00:19:14.922515 sshd[1857]: Connection closed by 139.178.68.195 port 53542 Jul 7 00:19:14.923525 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:14.928849 systemd[1]: sshd@7-10.128.0.28:22-139.178.68.195:53542.service: Deactivated successfully. Jul 7 00:19:14.931460 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:19:14.935461 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:19:14.937060 systemd-logind[1548]: Removed session 8. Jul 7 00:19:14.977620 systemd[1]: Started sshd@8-10.128.0.28:22-139.178.68.195:53546.service - OpenSSH per-connection server daemon (139.178.68.195:53546). Jul 7 00:19:15.296005 sshd[1890]: Accepted publickey for core from 139.178.68.195 port 53546 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:15.297996 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:15.305568 systemd-logind[1548]: New session 9 of user core. Jul 7 00:19:15.313114 systemd[1]: Started session-9.scope - Session 9 of User core. 
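[editor's note] Over sudo, the core user removes the two shipped audit rule files and restarts audit-rules.service, after which augenrules finds nothing to load ("No rules"). A quick sketch to confirm the resulting state; auditctl comes from the same audit package that provides augenrules, though it is not invoked anywhere in this log:

    # Rule files left on disk after 80-selinux.rules and 99-default.rules were removed.
    ls /etc/audit/rules.d/
    # Kernel-side ruleset; prints "No rules" when empty, matching the augenrules output above.
    auditctl -l
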
Jul 7 00:19:15.476729 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:19:15.477253 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:19:16.024296 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:19:16.036606 (dockerd)[1912]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:19:16.388213 dockerd[1912]: time="2025-07-07T00:19:16.388045693Z" level=info msg="Starting up" Jul 7 00:19:16.393136 dockerd[1912]: time="2025-07-07T00:19:16.393082009Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:19:16.481242 dockerd[1912]: time="2025-07-07T00:19:16.481185648Z" level=info msg="Loading containers: start." Jul 7 00:19:16.501885 kernel: Initializing XFRM netlink socket Jul 7 00:19:16.874058 systemd-networkd[1473]: docker0: Link UP Jul 7 00:19:16.883237 dockerd[1912]: time="2025-07-07T00:19:16.883167421Z" level=info msg="Loading containers: done." Jul 7 00:19:16.907859 dockerd[1912]: time="2025-07-07T00:19:16.905936372Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:19:16.907859 dockerd[1912]: time="2025-07-07T00:19:16.906064878Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:19:16.907859 dockerd[1912]: time="2025-07-07T00:19:16.906233787Z" level=info msg="Initializing buildkit" Jul 7 00:19:16.908444 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2735555335-merged.mount: Deactivated successfully. Jul 7 00:19:16.947508 dockerd[1912]: time="2025-07-07T00:19:16.947451830Z" level=info msg="Completed buildkit initialization" Jul 7 00:19:16.959293 dockerd[1912]: time="2025-07-07T00:19:16.959207568Z" level=info msg="Daemon has completed initialization" Jul 7 00:19:16.959611 dockerd[1912]: time="2025-07-07T00:19:16.959312610Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:19:16.959663 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:19:17.964766 containerd[1559]: time="2025-07-07T00:19:17.964711868Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:19:18.588520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118397043.mount: Deactivated successfully. 
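[editor's note] dockerd 28.0.1 comes up on the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, which, per the warning itself, only degrades image-build performance. A small sketch for confirming what the daemon reported; the Go-template field names are standard docker CLI usage rather than anything shown in this log:

    # Storage driver and server version as reported by the daemon.
    docker info --format '{{.Driver}} {{.ServerVersion}}'
    # The kernel option behind the warning, if this kernel exposes its config at /proc/config.gz.
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz
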
Jul 7 00:19:20.272974 containerd[1559]: time="2025-07-07T00:19:20.272896933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:20.274676 containerd[1559]: time="2025-07-07T00:19:20.274610220Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28805673" Jul 7 00:19:20.277122 containerd[1559]: time="2025-07-07T00:19:20.277015426Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:20.281291 containerd[1559]: time="2025-07-07T00:19:20.281202049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:20.283265 containerd[1559]: time="2025-07-07T00:19:20.283010177Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.318244469s" Jul 7 00:19:20.283265 containerd[1559]: time="2025-07-07T00:19:20.283074007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:19:20.284187 containerd[1559]: time="2025-07-07T00:19:20.284155875Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:19:21.947146 containerd[1559]: time="2025-07-07T00:19:21.947060252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:21.948741 containerd[1559]: time="2025-07-07T00:19:21.948682775Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24785846" Jul 7 00:19:21.950364 containerd[1559]: time="2025-07-07T00:19:21.950310132Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:21.954722 containerd[1559]: time="2025-07-07T00:19:21.954641919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:21.956388 containerd[1559]: time="2025-07-07T00:19:21.956130577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.671794681s" Jul 7 00:19:21.956388 containerd[1559]: time="2025-07-07T00:19:21.956185651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:19:21.957534 containerd[1559]: 
time="2025-07-07T00:19:21.957220117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:19:23.312544 containerd[1559]: time="2025-07-07T00:19:23.312465891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:23.314116 containerd[1559]: time="2025-07-07T00:19:23.314038956Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19178832" Jul 7 00:19:23.315731 containerd[1559]: time="2025-07-07T00:19:23.315663970Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:23.319207 containerd[1559]: time="2025-07-07T00:19:23.319131587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:23.320667 containerd[1559]: time="2025-07-07T00:19:23.320476604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.363212098s" Jul 7 00:19:23.320667 containerd[1559]: time="2025-07-07T00:19:23.320527515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:19:23.321097 containerd[1559]: time="2025-07-07T00:19:23.321070157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:19:23.828013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:19:23.834124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:24.320721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:24.333702 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:19:24.422656 kubelet[2186]: E0707 00:19:24.422576 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:19:24.426930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:19:24.427188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:19:24.428056 systemd[1]: kubelet.service: Consumed 267ms CPU time, 109.9M memory peak. Jul 7 00:19:24.709428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119915697.mount: Deactivated successfully. 
Jul 7 00:19:25.404330 containerd[1559]: time="2025-07-07T00:19:25.404235974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:25.406014 containerd[1559]: time="2025-07-07T00:19:25.405953617Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30897258" Jul 7 00:19:25.407642 containerd[1559]: time="2025-07-07T00:19:25.407561385Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:25.410435 containerd[1559]: time="2025-07-07T00:19:25.410355615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:25.411519 containerd[1559]: time="2025-07-07T00:19:25.411278037Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.09006465s" Jul 7 00:19:25.411519 containerd[1559]: time="2025-07-07T00:19:25.411340186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:19:25.412300 containerd[1559]: time="2025-07-07T00:19:25.412172467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:19:25.871902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530813180.mount: Deactivated successfully. 
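Each "Pulled image ... size ... in ..." record pairs a byte count with a wall-clock duration, so effective pull throughput falls out directly; for kube-proxy above, 30894382 bytes in 2.09006465 s is roughly 14.8 MB/s. A small sketch of that arithmetic (the regex over the message text is an assumption for illustration, not a containerd interface):

    import re

    # Excerpt of the containerd pull record above, reduced to the relevant part.
    MSG = ('Pulled image "registry.k8s.io/kube-proxy:v1.32.6" '
           'size "30894382" in 2.09006465s')

    def pull_throughput(msg: str) -> float:
        """Parse size (bytes) and duration (seconds) out of a pull record and
        return throughput in MB/s. Assumes the 'size "<bytes>" in <dur>s'
        shape seen in the log."""
        m = re.search(r'size "(\d+)" in ([0-9.]+)s', msg)
        if not m:
            raise ValueError("unrecognized pull record")
        size_bytes, seconds = int(m.group(1)), float(m.group(2))
        return size_bytes / seconds / 1e6

    print(f"{pull_throughput(MSG):.1f} MB/s")  # ~14.8 MB/s for kube-proxy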
Jul 7 00:19:27.069624 containerd[1559]: time="2025-07-07T00:19:27.069553562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:27.071288 containerd[1559]: time="2025-07-07T00:19:27.071168219Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Jul 7 00:19:27.072395 containerd[1559]: time="2025-07-07T00:19:27.072299056Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:27.079271 containerd[1559]: time="2025-07-07T00:19:27.079150321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:27.080974 containerd[1559]: time="2025-07-07T00:19:27.080724577Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.668409484s" Jul 7 00:19:27.080974 containerd[1559]: time="2025-07-07T00:19:27.080783142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:19:27.081382 containerd[1559]: time="2025-07-07T00:19:27.081350441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:19:27.460894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114782230.mount: Deactivated successfully. 
Jul 7 00:19:27.468442 containerd[1559]: time="2025-07-07T00:19:27.468349800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:19:27.469626 containerd[1559]: time="2025-07-07T00:19:27.469560366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jul 7 00:19:27.471257 containerd[1559]: time="2025-07-07T00:19:27.471196250Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:19:27.474576 containerd[1559]: time="2025-07-07T00:19:27.474510401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:19:27.475817 containerd[1559]: time="2025-07-07T00:19:27.475468725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 393.935881ms" Jul 7 00:19:27.475817 containerd[1559]: time="2025-07-07T00:19:27.475517052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:19:27.476242 containerd[1559]: time="2025-07-07T00:19:27.476176294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:19:27.894030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213089118.mount: Deactivated successfully. Jul 7 00:19:29.659096 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
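Unlike the other images pulled above, pause:3.10 carries the extra label io.cri-containerd.pinned=pinned, which is what protects it from image garbage collection. A sketch of filtering an image list on that label; the data structure below is a hand-written stand-in mirroring the labels printed in the log, not containerd's actual API objects:

    # Stand-in for the image records logged above; only the fields used in
    # the example are kept.
    images = [
        {"name": "registry.k8s.io/coredns/coredns:v1.11.3",
         "labels": {"io.cri-containerd.image": "managed"}},
        {"name": "registry.k8s.io/pause:3.10",
         "labels": {"io.cri-containerd.image": "managed",
                    "io.cri-containerd.pinned": "pinned"}},
    ]

    pinned = [img["name"] for img in images
              if img["labels"].get("io.cri-containerd.pinned") == "pinned"]
    print(pinned)  # ['registry.k8s.io/pause:3.10']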
Jul 7 00:19:30.259313 containerd[1559]: time="2025-07-07T00:19:30.259238694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:30.260776 containerd[1559]: time="2025-07-07T00:19:30.260712320Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924" Jul 7 00:19:30.262885 containerd[1559]: time="2025-07-07T00:19:30.262786341Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:30.267356 containerd[1559]: time="2025-07-07T00:19:30.267250345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:30.269123 containerd[1559]: time="2025-07-07T00:19:30.268889412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.792667631s" Jul 7 00:19:30.269123 containerd[1559]: time="2025-07-07T00:19:30.268946522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:19:33.710901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:33.711219 systemd[1]: kubelet.service: Consumed 267ms CPU time, 109.9M memory peak. Jul 7 00:19:33.714999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:33.760628 systemd[1]: Reload requested from client PID 2337 ('systemctl') (unit session-9.scope)... Jul 7 00:19:33.760665 systemd[1]: Reloading... Jul 7 00:19:33.905126 zram_generator::config[2377]: No configuration found. Jul 7 00:19:34.108540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:19:34.314863 systemd[1]: Reloading finished in 553 ms. Jul 7 00:19:34.358655 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:19:34.359334 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:19:34.360140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:34.360238 systemd[1]: kubelet.service: Consumed 154ms CPU time, 94.2M memory peak. Jul 7 00:19:34.366252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:34.961451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:34.973694 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:19:35.035020 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:19:35.035020 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 7 00:19:35.035020 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:19:35.035584 kubelet[2429]: I0707 00:19:35.035101 2429 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:19:35.495815 kubelet[2429]: I0707 00:19:35.495721 2429 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:19:35.495815 kubelet[2429]: I0707 00:19:35.495765 2429 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:19:35.496280 kubelet[2429]: I0707 00:19:35.496235 2429 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:19:35.546932 kubelet[2429]: E0707 00:19:35.546871 2429 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:35.550932 kubelet[2429]: I0707 00:19:35.550680 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:19:35.569314 kubelet[2429]: I0707 00:19:35.569277 2429 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:19:35.574198 kubelet[2429]: I0707 00:19:35.574150 2429 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:19:35.575760 kubelet[2429]: I0707 00:19:35.575692 2429 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:19:35.576025 kubelet[2429]: I0707 00:19:35.575748 2429 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:19:35.576219 kubelet[2429]: I0707 00:19:35.576030 2429 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:19:35.576219 kubelet[2429]: I0707 00:19:35.576048 2429 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:19:35.576326 kubelet[2429]: I0707 00:19:35.576223 2429 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:19:35.582438 kubelet[2429]: I0707 00:19:35.582329 2429 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:19:35.585534 kubelet[2429]: I0707 00:19:35.585007 2429 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:19:35.585534 kubelet[2429]: I0707 00:19:35.585092 2429 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:19:35.585534 kubelet[2429]: I0707 00:19:35.585159 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:19:35.590829 kubelet[2429]: I0707 00:19:35.590054 2429 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:19:35.590829 kubelet[2429]: I0707 00:19:35.590733 2429 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:19:35.592276 kubelet[2429]: W0707 00:19:35.592234 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 00:19:35.596223 kubelet[2429]: I0707 00:19:35.595454 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:19:35.596223 kubelet[2429]: I0707 00:19:35.595637 2429 server.go:1287] "Started kubelet" Jul 7 00:19:35.596223 kubelet[2429]: W0707 00:19:35.595941 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Jul 7 00:19:35.596223 kubelet[2429]: E0707 00:19:35.596023 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:35.604782 kubelet[2429]: W0707 00:19:35.604045 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Jul 7 00:19:35.604782 kubelet[2429]: E0707 00:19:35.604134 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:35.604782 kubelet[2429]: I0707 00:19:35.604200 2429 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:19:35.607531 kubelet[2429]: I0707 00:19:35.607192 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:19:35.608852 kubelet[2429]: I0707 00:19:35.608733 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:19:35.609349 kubelet[2429]: I0707 00:19:35.609298 2429 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:19:35.612832 kubelet[2429]: E0707 00:19:35.609686 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal.184fd0169bffb896 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,UID:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,},FirstTimestamp:2025-07-07 00:19:35.595497622 +0000 UTC m=+0.615168971,LastTimestamp:2025-07-07 00:19:35.595497622 +0000 UTC m=+0.615168971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,}" Jul 7 00:19:35.615970 kubelet[2429]: I0707 00:19:35.615936 2429 server.go:479] 
"Adding debug handlers to kubelet server" Jul 7 00:19:35.618096 kubelet[2429]: I0707 00:19:35.617366 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:19:35.620877 kubelet[2429]: I0707 00:19:35.620839 2429 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:19:35.621229 kubelet[2429]: E0707 00:19:35.621198 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" Jul 7 00:19:35.622282 kubelet[2429]: E0707 00:19:35.622244 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="200ms" Jul 7 00:19:35.624480 kubelet[2429]: I0707 00:19:35.624402 2429 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:19:35.624851 kubelet[2429]: I0707 00:19:35.624786 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:19:35.625971 kubelet[2429]: I0707 00:19:35.625942 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:19:35.626070 kubelet[2429]: I0707 00:19:35.626015 2429 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:19:35.628286 kubelet[2429]: W0707 00:19:35.627727 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Jul 7 00:19:35.628506 kubelet[2429]: E0707 00:19:35.628476 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:35.628813 kubelet[2429]: E0707 00:19:35.628775 2429 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:19:35.629103 kubelet[2429]: I0707 00:19:35.629084 2429 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:19:35.650152 kubelet[2429]: I0707 00:19:35.650072 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:19:35.653454 kubelet[2429]: I0707 00:19:35.652836 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:19:35.653454 kubelet[2429]: I0707 00:19:35.652977 2429 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:19:35.653454 kubelet[2429]: I0707 00:19:35.653013 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:19:35.653454 kubelet[2429]: I0707 00:19:35.653026 2429 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:19:35.653454 kubelet[2429]: E0707 00:19:35.653119 2429 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:19:35.665515 kubelet[2429]: W0707 00:19:35.665432 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Jul 7 00:19:35.665677 kubelet[2429]: E0707 00:19:35.665569 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:35.672091 kubelet[2429]: I0707 00:19:35.672055 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:19:35.672091 kubelet[2429]: I0707 00:19:35.672085 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:19:35.672325 kubelet[2429]: I0707 00:19:35.672120 2429 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:19:35.675403 kubelet[2429]: I0707 00:19:35.675080 2429 policy_none.go:49] "None policy: Start" Jul 7 00:19:35.675403 kubelet[2429]: I0707 00:19:35.675112 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:19:35.675403 kubelet[2429]: I0707 00:19:35.675126 2429 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:19:35.685770 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:19:35.700777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:19:35.706431 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:19:35.719319 kubelet[2429]: I0707 00:19:35.719281 2429 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:19:35.720214 kubelet[2429]: I0707 00:19:35.720114 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:19:35.720306 kubelet[2429]: I0707 00:19:35.720141 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:19:35.721146 kubelet[2429]: I0707 00:19:35.721116 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:19:35.723832 kubelet[2429]: E0707 00:19:35.723780 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:19:35.724304 kubelet[2429]: E0707 00:19:35.724238 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" Jul 7 00:19:35.783511 systemd[1]: Created slice kubepods-burstable-pod81a2435b94a5a184fd13b479a3f79521.slice - libcontainer container kubepods-burstable-pod81a2435b94a5a184fd13b479a3f79521.slice. 
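With the systemd cgroup driver (reported earlier by the CRI runtime) the kubelet nests per-pod slices under per-QoS slices, which is why the burstable pod above shows up as kubepods-burstable-pod<uid>.slice. A sketch of composing the conventional cgroup v2 path for that slice; the /sys/fs/cgroup mount point and the nesting order are assumptions based on the standard layout, not read from this log:

    import os

    def burstable_pod_cgroup(pod_uid: str, root: str = "/sys/fs/cgroup") -> str:
        """Compose the conventional systemd-driver path for a burstable pod's
        cgroup. Assumes the usual kubepods.slice/kubepods-burstable.slice
        nesting; dashes in real pod UIDs are escaped by the kubelet, which
        this sketch does not model."""
        slice_name = f"kubepods-burstable-pod{pod_uid}.slice"
        return os.path.join(root, "kubepods.slice",
                            "kubepods-burstable.slice", slice_name)

    # UID taken from the "Created slice" record above.
    print(burstable_pod_cgroup("81a2435b94a5a184fd13b479a3f79521"))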
Jul 7 00:19:35.796551 kubelet[2429]: E0707 00:19:35.796176 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.801361 systemd[1]: Created slice kubepods-burstable-pod076f67ae8629bb131bc105b0de8bcf6f.slice - libcontainer container kubepods-burstable-pod076f67ae8629bb131bc105b0de8bcf6f.slice. Jul 7 00:19:35.815402 kubelet[2429]: E0707 00:19:35.815121 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.819448 systemd[1]: Created slice kubepods-burstable-pod5eb68c534d9878ec96017467359730d1.slice - libcontainer container kubepods-burstable-pod5eb68c534d9878ec96017467359730d1.slice. Jul 7 00:19:35.823060 kubelet[2429]: E0707 00:19:35.822990 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.824425 kubelet[2429]: E0707 00:19:35.824377 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="400ms" Jul 7 00:19:35.825958 kubelet[2429]: I0707 00:19:35.825841 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.826477 kubelet[2429]: E0707 00:19:35.826417 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.826477 kubelet[2429]: I0707 00:19:35.826432 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eb68c534d9878ec96017467359730d1-kubeconfig\") pod \"kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"5eb68c534d9878ec96017467359730d1\") " pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.826877 kubelet[2429]: I0707 00:19:35.826781 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-ca-certs\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.826877 kubelet[2429]: I0707 00:19:35.826883 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") 
" pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827034 kubelet[2429]: I0707 00:19:35.826918 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827034 kubelet[2429]: I0707 00:19:35.826948 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827034 kubelet[2429]: I0707 00:19:35.826978 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-k8s-certs\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827034 kubelet[2429]: I0707 00:19:35.827008 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827237 kubelet[2429]: I0707 00:19:35.827040 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-ca-certs\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:35.827237 kubelet[2429]: I0707 00:19:35.827083 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.032621 kubelet[2429]: I0707 00:19:36.032556 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.033099 kubelet[2429]: E0707 00:19:36.033049 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" 
node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.097846 containerd[1559]: time="2025-07-07T00:19:36.097768352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:81a2435b94a5a184fd13b479a3f79521,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:36.120415 containerd[1559]: time="2025-07-07T00:19:36.120078152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:076f67ae8629bb131bc105b0de8bcf6f,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:36.127373 containerd[1559]: time="2025-07-07T00:19:36.127323302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:5eb68c534d9878ec96017467359730d1,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:36.136164 containerd[1559]: time="2025-07-07T00:19:36.136102949Z" level=info msg="connecting to shim f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f" address="unix:///run/containerd/s/6bd407bfc06b2cb4bce0dc014778cd78e11f14100b9e61eb8b21dfc0e0dae3ac" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:36.196337 containerd[1559]: time="2025-07-07T00:19:36.196272415Z" level=info msg="connecting to shim c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7" address="unix:///run/containerd/s/f8bc55edb5bf4510f865ad6da677374621d21d1044ccf21dee3230d3b94c0b1e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:36.215388 systemd[1]: Started cri-containerd-f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f.scope - libcontainer container f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f. Jul 7 00:19:36.230228 kubelet[2429]: E0707 00:19:36.230171 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="800ms" Jul 7 00:19:36.240815 containerd[1559]: time="2025-07-07T00:19:36.240066357Z" level=info msg="connecting to shim 8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0" address="unix:///run/containerd/s/1643f56de2ef0f469e43b7e0170590d4106f905a418e4fc6485f4626bb7d5488" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:36.284394 systemd[1]: Started cri-containerd-c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7.scope - libcontainer container c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7. Jul 7 00:19:36.324173 systemd[1]: Started cri-containerd-8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0.scope - libcontainer container 8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0. 
Jul 7 00:19:36.373153 containerd[1559]: time="2025-07-07T00:19:36.373014455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:81a2435b94a5a184fd13b479a3f79521,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f\"" Jul 7 00:19:36.382018 kubelet[2429]: E0707 00:19:36.381958 2429 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-21291" Jul 7 00:19:36.386648 containerd[1559]: time="2025-07-07T00:19:36.386584547Z" level=info msg="CreateContainer within sandbox \"f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:19:36.399825 containerd[1559]: time="2025-07-07T00:19:36.399642827Z" level=info msg="Container 83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:19:36.423899 containerd[1559]: time="2025-07-07T00:19:36.422673709Z" level=info msg="CreateContainer within sandbox \"f1d9ea29de45d8331413e8ef8926abb0c9048b5903fb2d996ebf46c04a8f583f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde\"" Jul 7 00:19:36.425309 containerd[1559]: time="2025-07-07T00:19:36.425065800Z" level=info msg="StartContainer for \"83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde\"" Jul 7 00:19:36.430282 containerd[1559]: time="2025-07-07T00:19:36.430227718Z" level=info msg="connecting to shim 83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde" address="unix:///run/containerd/s/6bd407bfc06b2cb4bce0dc014778cd78e11f14100b9e61eb8b21dfc0e0dae3ac" protocol=ttrpc version=3 Jul 7 00:19:36.443778 containerd[1559]: time="2025-07-07T00:19:36.443705913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:076f67ae8629bb131bc105b0de8bcf6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7\"" Jul 7 00:19:36.447713 kubelet[2429]: E0707 00:19:36.447670 2429 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flat" Jul 7 00:19:36.450309 kubelet[2429]: I0707 00:19:36.450271 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.451547 kubelet[2429]: E0707 00:19:36.451206 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.454446 containerd[1559]: time="2025-07-07T00:19:36.453895096Z" level=info msg="CreateContainer within sandbox \"c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:19:36.471477 containerd[1559]: time="2025-07-07T00:19:36.471404161Z" level=info 
msg="Container 0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:19:36.474556 containerd[1559]: time="2025-07-07T00:19:36.474507420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,Uid:5eb68c534d9878ec96017467359730d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0\"" Jul 7 00:19:36.477720 kubelet[2429]: E0707 00:19:36.476665 2429 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-21291" Jul 7 00:19:36.479610 containerd[1559]: time="2025-07-07T00:19:36.479553333Z" level=info msg="CreateContainer within sandbox \"8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:19:36.482391 systemd[1]: Started cri-containerd-83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde.scope - libcontainer container 83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde. Jul 7 00:19:36.498721 containerd[1559]: time="2025-07-07T00:19:36.497572625Z" level=info msg="Container 5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:19:36.500098 containerd[1559]: time="2025-07-07T00:19:36.500050824Z" level=info msg="CreateContainer within sandbox \"c8a8bd383d9b78aba4b338573aba565adb7f2bc04ea740b25498a3ae916ac8a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713\"" Jul 7 00:19:36.501351 containerd[1559]: time="2025-07-07T00:19:36.501316933Z" level=info msg="StartContainer for \"0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713\"" Jul 7 00:19:36.504998 containerd[1559]: time="2025-07-07T00:19:36.504936778Z" level=info msg="connecting to shim 0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713" address="unix:///run/containerd/s/f8bc55edb5bf4510f865ad6da677374621d21d1044ccf21dee3230d3b94c0b1e" protocol=ttrpc version=3 Jul 7 00:19:36.519189 containerd[1559]: time="2025-07-07T00:19:36.519076824Z" level=info msg="CreateContainer within sandbox \"8c027893a1c019c9b11356a78efe448059eb8182d4d02a8aa83933b9a994f1e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56\"" Jul 7 00:19:36.519901 containerd[1559]: time="2025-07-07T00:19:36.519851981Z" level=info msg="StartContainer for \"5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56\"" Jul 7 00:19:36.525090 containerd[1559]: time="2025-07-07T00:19:36.525025732Z" level=info msg="connecting to shim 5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56" address="unix:///run/containerd/s/1643f56de2ef0f469e43b7e0170590d4106f905a418e4fc6485f4626bb7d5488" protocol=ttrpc version=3 Jul 7 00:19:36.564026 systemd[1]: Started cri-containerd-5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56.scope - libcontainer container 5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56. 
Jul 7 00:19:36.578051 systemd[1]: Started cri-containerd-0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713.scope - libcontainer container 0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713. Jul 7 00:19:36.631386 containerd[1559]: time="2025-07-07T00:19:36.629745410Z" level=info msg="StartContainer for \"83e896278b1b7f69e0861062670038cfcbb98b78e0529a46fb77e3e4e792ddde\" returns successfully" Jul 7 00:19:36.633455 kubelet[2429]: W0707 00:19:36.633395 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Jul 7 00:19:36.633719 kubelet[2429]: E0707 00:19:36.633478 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.28:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:19:36.681606 kubelet[2429]: E0707 00:19:36.681552 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:36.759313 containerd[1559]: time="2025-07-07T00:19:36.759150063Z" level=info msg="StartContainer for \"0eea9bda72dd580d813b3b710f38b58d2b794617261e13771c75289a9c2d0713\" returns successfully" Jul 7 00:19:36.812616 containerd[1559]: time="2025-07-07T00:19:36.812562784Z" level=info msg="StartContainer for \"5127c5b51e33f51e3804880133972ab0bfcca46c31f7dc5805dfb9b28ca34e56\" returns successfully" Jul 7 00:19:37.260274 kubelet[2429]: I0707 00:19:37.260229 2429 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:37.725918 kubelet[2429]: E0707 00:19:37.725874 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:37.727156 kubelet[2429]: E0707 00:19:37.727122 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:37.727637 kubelet[2429]: E0707 00:19:37.727604 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:38.717523 kubelet[2429]: E0707 00:19:38.717473 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:38.718444 kubelet[2429]: E0707 00:19:38.718411 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" 
node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:39.717174 kubelet[2429]: E0707 00:19:39.717071 2429 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.439307 kubelet[2429]: I0707 00:19:40.439250 2429 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.487842 kubelet[2429]: E0707 00:19:40.485787 2429 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal.184fd0169bffb896 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,UID:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,},FirstTimestamp:2025-07-07 00:19:35.595497622 +0000 UTC m=+0.615168971,LastTimestamp:2025-07-07 00:19:35.595497622 +0000 UTC m=+0.615168971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal,}" Jul 7 00:19:40.521938 kubelet[2429]: I0707 00:19:40.521880 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.529410 kubelet[2429]: E0707 00:19:40.529332 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.530590 kubelet[2429]: I0707 00:19:40.529385 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.534449 kubelet[2429]: E0707 00:19:40.534390 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.534819 kubelet[2429]: I0707 00:19:40.534657 2429 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.537299 kubelet[2429]: E0707 00:19:40.537257 2429 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:40.603929 kubelet[2429]: I0707 00:19:40.603880 2429 apiserver.go:52] "Watching apiserver" Jul 7 00:19:40.626405 kubelet[2429]: I0707 00:19:40.626346 2429 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:19:42.458992 systemd[1]: Reload requested from client PID 2700 ('systemctl') (unit session-9.scope)... Jul 7 00:19:42.459016 systemd[1]: Reloading... Jul 7 00:19:42.622862 zram_generator::config[2747]: No configuration found. Jul 7 00:19:42.745445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:19:42.946504 systemd[1]: Reloading finished in 486 ms. Jul 7 00:19:42.987643 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:43.003849 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:19:43.004217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:43.004299 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 132.8M memory peak. Jul 7 00:19:43.008409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:19:43.339191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:19:43.357617 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:19:43.454624 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:19:43.454624 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:19:43.454624 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:19:43.457398 kubelet[2792]: I0707 00:19:43.457234 2792 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:19:43.488135 kubelet[2792]: I0707 00:19:43.488080 2792 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:19:43.488135 kubelet[2792]: I0707 00:19:43.488130 2792 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:19:43.488740 kubelet[2792]: I0707 00:19:43.488593 2792 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:19:43.499204 kubelet[2792]: I0707 00:19:43.499159 2792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
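During each daemon reload systemd flags docker.socket for pointing ListenStream= at the legacy /var/run/ directory and transparently maps it to /run/ (the warning above shows /var/run/docker.sock → /run/docker.sock). A sketch of that normalisation exactly as the message states it; systemd's own handling covers more cases than this:

    def normalize_runtime_path(path: str) -> str:
        """Map the legacy /var/run/ prefix to /run/, as the systemd warning
        in the log describes for docker.socket's ListenStream= value."""
        legacy = "/var/run/"
        if path.startswith(legacy):
            return "/run/" + path[len(legacy):]
        return path

    print(normalize_runtime_path("/var/run/docker.sock"))  # /run/docker.sock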
Jul 7 00:19:43.500964 sudo[2805]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:19:43.501583 sudo[2805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:19:43.507741 kubelet[2792]: I0707 00:19:43.507557 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:19:43.516151 kubelet[2792]: I0707 00:19:43.516115 2792 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:19:43.521595 kubelet[2792]: I0707 00:19:43.521546 2792 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:19:43.523652 kubelet[2792]: I0707 00:19:43.523592 2792 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:19:43.524096 kubelet[2792]: I0707 00:19:43.523650 2792 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:19:43.524273 kubelet[2792]: I0707 00:19:43.524109 2792 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:19:43.524273 kubelet[2792]: I0707 00:19:43.524129 2792 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:19:43.524273 kubelet[2792]: I0707 00:19:43.524207 2792 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:19:43.525782 kubelet[2792]: I0707 00:19:43.524446 2792 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:19:43.525782 kubelet[2792]: I0707 00:19:43.524480 2792 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:19:43.525782 kubelet[2792]: I0707 00:19:43.524513 2792 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:19:43.525782 kubelet[2792]: I0707 00:19:43.524529 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:19:43.540577 kubelet[2792]: 
I0707 00:19:43.538719 2792 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:19:43.540577 kubelet[2792]: I0707 00:19:43.539505 2792 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:19:43.544203 kubelet[2792]: I0707 00:19:43.544119 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:19:43.544203 kubelet[2792]: I0707 00:19:43.544176 2792 server.go:1287] "Started kubelet" Jul 7 00:19:43.552001 kubelet[2792]: I0707 00:19:43.551728 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:19:43.571856 kubelet[2792]: I0707 00:19:43.569081 2792 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:19:43.572450 kubelet[2792]: I0707 00:19:43.572419 2792 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:19:43.582600 kubelet[2792]: I0707 00:19:43.582515 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:19:43.583532 kubelet[2792]: I0707 00:19:43.583414 2792 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:19:43.587753 kubelet[2792]: I0707 00:19:43.587317 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:19:43.591290 kubelet[2792]: I0707 00:19:43.590943 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:19:43.591443 kubelet[2792]: E0707 00:19:43.591359 2792 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" not found" Jul 7 00:19:43.596570 kubelet[2792]: I0707 00:19:43.595010 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:19:43.596570 kubelet[2792]: I0707 00:19:43.595265 2792 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:19:43.626373 kubelet[2792]: E0707 00:19:43.626023 2792 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:19:43.638351 kubelet[2792]: I0707 00:19:43.636256 2792 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:19:43.638351 kubelet[2792]: I0707 00:19:43.636287 2792 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:19:43.638351 kubelet[2792]: I0707 00:19:43.636434 2792 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:19:43.681520 kubelet[2792]: I0707 00:19:43.681472 2792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:19:43.698018 kubelet[2792]: I0707 00:19:43.695903 2792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:19:43.698018 kubelet[2792]: I0707 00:19:43.696000 2792 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:19:43.698018 kubelet[2792]: I0707 00:19:43.696073 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:19:43.698018 kubelet[2792]: I0707 00:19:43.696095 2792 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:19:43.700259 kubelet[2792]: E0707 00:19:43.700097 2792 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:19:43.786175 kubelet[2792]: I0707 00:19:43.786136 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:19:43.786175 kubelet[2792]: I0707 00:19:43.786166 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:19:43.786417 kubelet[2792]: I0707 00:19:43.786372 2792 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786623 2792 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786645 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786674 2792 policy_none.go:49] "None policy: Start" Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786690 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786706 2792 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:19:43.787312 kubelet[2792]: I0707 00:19:43.786873 2792 state_mem.go:75] "Updated machine memory state" Jul 7 00:19:43.799506 kubelet[2792]: I0707 00:19:43.799466 2792 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:19:43.800542 kubelet[2792]: I0707 00:19:43.799709 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:19:43.800542 kubelet[2792]: I0707 00:19:43.799739 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:19:43.803211 kubelet[2792]: I0707 00:19:43.803178 2792 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.808613 kubelet[2792]: I0707 00:19:43.808202 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:19:43.808613 kubelet[2792]: I0707 00:19:43.808526 2792 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.812761 kubelet[2792]: I0707 00:19:43.812078 2792 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.812948 kubelet[2792]: E0707 00:19:43.812787 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:19:43.843725 kubelet[2792]: W0707 00:19:43.843681 2792 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:19:43.851099 kubelet[2792]: W0707 00:19:43.850719 2792 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:19:43.854465 kubelet[2792]: W0707 00:19:43.854412 2792 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:19:43.897098 kubelet[2792]: I0707 00:19:43.897024 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-k8s-certs\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897098 kubelet[2792]: I0707 00:19:43.897100 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-ca-certs\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897358 kubelet[2792]: I0707 00:19:43.897133 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897358 kubelet[2792]: I0707 00:19:43.897161 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897358 kubelet[2792]: I0707 00:19:43.897193 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897358 kubelet[2792]: I0707 00:19:43.897224 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-ca-certs\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897564 kubelet[2792]: I0707 00:19:43.897260 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/076f67ae8629bb131bc105b0de8bcf6f-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"076f67ae8629bb131bc105b0de8bcf6f\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897564 kubelet[2792]: I0707 00:19:43.897286 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5eb68c534d9878ec96017467359730d1-kubeconfig\") pod \"kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"5eb68c534d9878ec96017467359730d1\") " pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.897564 kubelet[2792]: I0707 00:19:43.897316 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a2435b94a5a184fd13b479a3f79521-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" (UID: \"81a2435b94a5a184fd13b479a3f79521\") " pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.933259 kubelet[2792]: I0707 00:19:43.933175 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.955143 kubelet[2792]: I0707 00:19:43.955008 2792 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.955848 kubelet[2792]: I0707 00:19:43.955634 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" Jul 7 00:19:43.988981 update_engine[1554]: I20250707 00:19:43.988850 1554 update_attempter.cc:509] Updating boot flags... 
Jul 7 00:19:44.545170 kubelet[2792]: I0707 00:19:44.543484 2792 apiserver.go:52] "Watching apiserver" Jul 7 00:19:44.597284 kubelet[2792]: I0707 00:19:44.595725 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:19:44.719760 kubelet[2792]: I0707 00:19:44.719481 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" podStartSLOduration=1.719459401 podStartE2EDuration="1.719459401s" podCreationTimestamp="2025-07-07 00:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:19:44.718431764 +0000 UTC m=+1.351701960" watchObservedRunningTime="2025-07-07 00:19:44.719459401 +0000 UTC m=+1.352729595" Jul 7 00:19:44.761819 kubelet[2792]: I0707 00:19:44.761295 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" podStartSLOduration=1.7612661840000001 podStartE2EDuration="1.761266184s" podCreationTimestamp="2025-07-07 00:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:19:44.740677125 +0000 UTC m=+1.373947321" watchObservedRunningTime="2025-07-07 00:19:44.761266184 +0000 UTC m=+1.394536376" Jul 7 00:19:44.775972 sudo[2805]: pam_unix(sudo:session): session closed for user root Jul 7 00:19:44.778936 kubelet[2792]: I0707 00:19:44.778775 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" podStartSLOduration=1.778750961 podStartE2EDuration="1.778750961s" podCreationTimestamp="2025-07-07 00:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:19:44.762767519 +0000 UTC m=+1.396037715" watchObservedRunningTime="2025-07-07 00:19:44.778750961 +0000 UTC m=+1.412021157" Jul 7 00:19:47.311780 sudo[1893]: pam_unix(sudo:session): session closed for user root Jul 7 00:19:47.354940 sshd[1892]: Connection closed by 139.178.68.195 port 53546 Jul 7 00:19:47.355871 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:47.361251 systemd[1]: sshd@8-10.128.0.28:22-139.178.68.195:53546.service: Deactivated successfully. Jul 7 00:19:47.364659 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:19:47.365064 systemd[1]: session-9.scope: Consumed 7.069s CPU time, 269.9M memory peak. Jul 7 00:19:47.368598 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:19:47.371372 systemd-logind[1548]: Removed session 9. Jul 7 00:19:47.741823 kubelet[2792]: I0707 00:19:47.741372 2792 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:19:47.742410 containerd[1559]: time="2025-07-07T00:19:47.742206878Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
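The pod_startup_latency_tracker entries above never pulled an image (the pulling timestamps are the zero value), and podStartSLOduration lines up with the gap between podCreationTimestamp and watchObservedRunningTime. Re-deriving it for the kube-scheduler entry, with the timestamps truncated to microseconds since Python's datetime has no nanoseconds:

from datetime import datetime, timezone

created  = datetime(2025, 7, 7, 0, 19, 43, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2025, 7, 7, 0, 19, 44, 719459, tzinfo=timezone.utc)   # watchObservedRunningTime

print((observed - created).total_seconds())  # 1.719459, i.e. the reported podStartSLOduration
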
Jul 7 00:19:47.742887 kubelet[2792]: I0707 00:19:47.742617 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:19:48.532266 systemd[1]: Created slice kubepods-besteffort-podcac2c492_b525_415d_b250_28a98de7db0c.slice - libcontainer container kubepods-besteffort-podcac2c492_b525_415d_b250_28a98de7db0c.slice. Jul 7 00:19:48.579933 systemd[1]: Created slice kubepods-burstable-podfe3baa84_9318_4e77_9d2f_8abe63724c57.slice - libcontainer container kubepods-burstable-podfe3baa84_9318_4e77_9d2f_8abe63724c57.slice. Jul 7 00:19:48.641823 kubelet[2792]: I0707 00:19:48.641569 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-run\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.641823 kubelet[2792]: I0707 00:19:48.641671 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-bpf-maps\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.641823 kubelet[2792]: I0707 00:19:48.641699 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-lib-modules\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.641823 kubelet[2792]: I0707 00:19:48.641767 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-net\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.641823 kubelet[2792]: I0707 00:19:48.641827 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-kernel\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642309 kubelet[2792]: I0707 00:19:48.641863 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-xtables-lock\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642309 kubelet[2792]: I0707 00:19:48.641918 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe3baa84-9318-4e77-9d2f-8abe63724c57-clustermesh-secrets\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642309 kubelet[2792]: I0707 00:19:48.641944 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-etc-cni-netd\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 
00:19:48.642309 kubelet[2792]: I0707 00:19:48.642011 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac2c492-b525-415d-b250-28a98de7db0c-lib-modules\") pod \"kube-proxy-qgn9f\" (UID: \"cac2c492-b525-415d-b250-28a98de7db0c\") " pod="kube-system/kube-proxy-qgn9f" Jul 7 00:19:48.642309 kubelet[2792]: I0707 00:19:48.642068 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-hostproc\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642309 kubelet[2792]: I0707 00:19:48.642095 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-cgroup\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642760 kubelet[2792]: I0707 00:19:48.642154 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cac2c492-b525-415d-b250-28a98de7db0c-kube-proxy\") pod \"kube-proxy-qgn9f\" (UID: \"cac2c492-b525-415d-b250-28a98de7db0c\") " pod="kube-system/kube-proxy-qgn9f" Jul 7 00:19:48.642760 kubelet[2792]: I0707 00:19:48.642239 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac2c492-b525-415d-b250-28a98de7db0c-xtables-lock\") pod \"kube-proxy-qgn9f\" (UID: \"cac2c492-b525-415d-b250-28a98de7db0c\") " pod="kube-system/kube-proxy-qgn9f" Jul 7 00:19:48.642760 kubelet[2792]: I0707 00:19:48.642270 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442d5\" (UniqueName: \"kubernetes.io/projected/cac2c492-b525-415d-b250-28a98de7db0c-kube-api-access-442d5\") pod \"kube-proxy-qgn9f\" (UID: \"cac2c492-b525-415d-b250-28a98de7db0c\") " pod="kube-system/kube-proxy-qgn9f" Jul 7 00:19:48.642760 kubelet[2792]: I0707 00:19:48.642360 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cni-path\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.642760 kubelet[2792]: I0707 00:19:48.642388 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-config-path\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.743828 kubelet[2792]: I0707 00:19:48.743247 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-hubble-tls\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.743828 kubelet[2792]: I0707 00:19:48.743745 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvfn\" 
(UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-kube-api-access-grvfn\") pod \"cilium-lfqg5\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " pod="kube-system/cilium-lfqg5" Jul 7 00:19:48.810136 systemd[1]: Created slice kubepods-besteffort-pod843880a1_7803_4479_97f7_690f1e2791e4.slice - libcontainer container kubepods-besteffort-pod843880a1_7803_4479_97f7_690f1e2791e4.slice. Jul 7 00:19:48.846159 containerd[1559]: time="2025-07-07T00:19:48.846099606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgn9f,Uid:cac2c492-b525-415d-b250-28a98de7db0c,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:48.848025 kubelet[2792]: I0707 00:19:48.845774 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42x2\" (UniqueName: \"kubernetes.io/projected/843880a1-7803-4479-97f7-690f1e2791e4-kube-api-access-d42x2\") pod \"cilium-operator-6c4d7847fc-rbv4t\" (UID: \"843880a1-7803-4479-97f7-690f1e2791e4\") " pod="kube-system/cilium-operator-6c4d7847fc-rbv4t" Jul 7 00:19:48.848246 kubelet[2792]: I0707 00:19:48.848200 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/843880a1-7803-4479-97f7-690f1e2791e4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rbv4t\" (UID: \"843880a1-7803-4479-97f7-690f1e2791e4\") " pod="kube-system/cilium-operator-6c4d7847fc-rbv4t" Jul 7 00:19:48.907537 containerd[1559]: time="2025-07-07T00:19:48.907483696Z" level=info msg="connecting to shim 48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b" address="unix:///run/containerd/s/700852cfa951ac2a5c683af19269f123ca7a1e42cf4d4c752c993605961c129e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:48.943187 systemd[1]: Started cri-containerd-48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b.scope - libcontainer container 48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b. 
Jul 7 00:19:49.000600 containerd[1559]: time="2025-07-07T00:19:49.000518708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgn9f,Uid:cac2c492-b525-415d-b250-28a98de7db0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b\"" Jul 7 00:19:49.005837 containerd[1559]: time="2025-07-07T00:19:49.005348398Z" level=info msg="CreateContainer within sandbox \"48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:19:49.020872 containerd[1559]: time="2025-07-07T00:19:49.020780917Z" level=info msg="Container 4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:19:49.031179 containerd[1559]: time="2025-07-07T00:19:49.031107733Z" level=info msg="CreateContainer within sandbox \"48f77558b72f7ddbe2c79c7b7f6328c82cd44d5c75ee43b2429e14a2b99b4c5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c\"" Jul 7 00:19:49.032200 containerd[1559]: time="2025-07-07T00:19:49.032154964Z" level=info msg="StartContainer for \"4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c\"" Jul 7 00:19:49.035092 containerd[1559]: time="2025-07-07T00:19:49.035026332Z" level=info msg="connecting to shim 4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c" address="unix:///run/containerd/s/700852cfa951ac2a5c683af19269f123ca7a1e42cf4d4c752c993605961c129e" protocol=ttrpc version=3 Jul 7 00:19:49.063169 systemd[1]: Started cri-containerd-4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c.scope - libcontainer container 4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c. Jul 7 00:19:49.125086 containerd[1559]: time="2025-07-07T00:19:49.125030195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rbv4t,Uid:843880a1-7803-4479-97f7-690f1e2791e4,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:49.133755 containerd[1559]: time="2025-07-07T00:19:49.133666377Z" level=info msg="StartContainer for \"4121d6883cdc018ec95b06ddcc5262e3085647268bad69f4c54800b21a3c983c\" returns successfully" Jul 7 00:19:49.178880 containerd[1559]: time="2025-07-07T00:19:49.178454467Z" level=info msg="connecting to shim 4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83" address="unix:///run/containerd/s/f6dc072e363c5c32f73b1e3f9edf5fa76498e922f7122a5f37b8119d05f244e6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:49.191826 containerd[1559]: time="2025-07-07T00:19:49.191761831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfqg5,Uid:fe3baa84-9318-4e77-9d2f-8abe63724c57,Namespace:kube-system,Attempt:0,}" Jul 7 00:19:49.223207 systemd[1]: Started cri-containerd-4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83.scope - libcontainer container 4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83. 
Jul 7 00:19:49.259297 containerd[1559]: time="2025-07-07T00:19:49.259240981Z" level=info msg="connecting to shim b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:19:49.317112 systemd[1]: Started cri-containerd-b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504.scope - libcontainer container b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504. Jul 7 00:19:49.387218 containerd[1559]: time="2025-07-07T00:19:49.387139915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfqg5,Uid:fe3baa84-9318-4e77-9d2f-8abe63724c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\"" Jul 7 00:19:49.395697 containerd[1559]: time="2025-07-07T00:19:49.395222120Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:19:49.400937 containerd[1559]: time="2025-07-07T00:19:49.400858831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rbv4t,Uid:843880a1-7803-4479-97f7-690f1e2791e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\"" Jul 7 00:19:49.807259 kubelet[2792]: I0707 00:19:49.807111 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qgn9f" podStartSLOduration=1.8070841949999998 podStartE2EDuration="1.807084195s" podCreationTimestamp="2025-07-07 00:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:19:49.805692337 +0000 UTC m=+6.438962534" watchObservedRunningTime="2025-07-07 00:19:49.807084195 +0000 UTC m=+6.440354391" Jul 7 00:19:55.117202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490295522.mount: Deactivated successfully. 
Jul 7 00:19:58.040018 containerd[1559]: time="2025-07-07T00:19:58.039927064Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:58.041431 containerd[1559]: time="2025-07-07T00:19:58.041372973Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:19:58.043203 containerd[1559]: time="2025-07-07T00:19:58.043128165Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:19:58.045303 containerd[1559]: time="2025-07-07T00:19:58.045127362Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.649175313s" Jul 7 00:19:58.045303 containerd[1559]: time="2025-07-07T00:19:58.045178717Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:19:58.047481 containerd[1559]: time="2025-07-07T00:19:58.047319288Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:19:58.049759 containerd[1559]: time="2025-07-07T00:19:58.049698634Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:19:58.064316 containerd[1559]: time="2025-07-07T00:19:58.064260818Z" level=info msg="Container c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:19:58.079548 containerd[1559]: time="2025-07-07T00:19:58.079474726Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\"" Jul 7 00:19:58.080283 containerd[1559]: time="2025-07-07T00:19:58.080224455Z" level=info msg="StartContainer for \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\"" Jul 7 00:19:58.082327 containerd[1559]: time="2025-07-07T00:19:58.082238434Z" level=info msg="connecting to shim c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" protocol=ttrpc version=3 Jul 7 00:19:58.123131 systemd[1]: Started cri-containerd-c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5.scope - libcontainer container c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5. 
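The cilium image pull that just finished reports both the image size and the wall-clock time, so the effective transfer rate falls out directly; the numbers below are copied from the Pulled image message above:

size_bytes = 166_719_855   # size "166719855" from the Pulled image message
duration_s = 8.649175313   # "in 8.649175313s"

print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # roughly 19.3 MB/s
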
Jul 7 00:19:58.173257 containerd[1559]: time="2025-07-07T00:19:58.173199148Z" level=info msg="StartContainer for \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" returns successfully" Jul 7 00:19:58.194319 systemd[1]: cri-containerd-c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5.scope: Deactivated successfully. Jul 7 00:19:58.197645 containerd[1559]: time="2025-07-07T00:19:58.197596573Z" level=info msg="received exit event container_id:\"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" id:\"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" pid:3224 exited_at:{seconds:1751847598 nanos:197003355}" Jul 7 00:19:58.198582 containerd[1559]: time="2025-07-07T00:19:58.198547591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" id:\"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" pid:3224 exited_at:{seconds:1751847598 nanos:197003355}" Jul 7 00:19:58.233678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5-rootfs.mount: Deactivated successfully. Jul 7 00:20:00.832167 containerd[1559]: time="2025-07-07T00:20:00.832037691Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:20:00.858835 containerd[1559]: time="2025-07-07T00:20:00.855569098Z" level=info msg="Container 166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:00.871147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454745620.mount: Deactivated successfully. Jul 7 00:20:00.880133 containerd[1559]: time="2025-07-07T00:20:00.879144478Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\"" Jul 7 00:20:00.883590 containerd[1559]: time="2025-07-07T00:20:00.883351068Z" level=info msg="StartContainer for \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\"" Jul 7 00:20:00.887622 containerd[1559]: time="2025-07-07T00:20:00.887574676Z" level=info msg="connecting to shim 166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" protocol=ttrpc version=3 Jul 7 00:20:00.939143 systemd[1]: Started cri-containerd-166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800.scope - libcontainer container 166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800. Jul 7 00:20:00.999348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300483175.mount: Deactivated successfully. Jul 7 00:20:01.054132 containerd[1559]: time="2025-07-07T00:20:01.053951679Z" level=info msg="StartContainer for \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" returns successfully" Jul 7 00:20:01.073387 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:20:01.073890 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:20:01.075836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
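The TaskExit events above carry exited_at as Unix seconds plus nanoseconds; converting the seconds value back to UTC lands on the same instant as the journal timestamps of those entries, which is a handy sanity check when correlating containerd events with the journal:

from datetime import datetime, timezone

print(datetime.fromtimestamp(1751847598, tz=timezone.utc).isoformat())
# 2025-07-07T00:19:58+00:00 -- matches the Jul 7 00:19:58 entries above
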
Jul 7 00:20:01.083126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:20:01.096942 systemd[1]: cri-containerd-166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800.scope: Deactivated successfully. Jul 7 00:20:01.104478 containerd[1559]: time="2025-07-07T00:20:01.103491215Z" level=info msg="received exit event container_id:\"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" id:\"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" pid:3269 exited_at:{seconds:1751847601 nanos:96713524}" Jul 7 00:20:01.105910 containerd[1559]: time="2025-07-07T00:20:01.105648905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" id:\"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" pid:3269 exited_at:{seconds:1751847601 nanos:96713524}" Jul 7 00:20:01.136026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:20:01.856754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800-rootfs.mount: Deactivated successfully. Jul 7 00:20:01.860238 containerd[1559]: time="2025-07-07T00:20:01.859746957Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:20:01.904658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037118510.mount: Deactivated successfully. Jul 7 00:20:01.905523 containerd[1559]: time="2025-07-07T00:20:01.905461044Z" level=info msg="Container 12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:01.930122 containerd[1559]: time="2025-07-07T00:20:01.930066407Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\"" Jul 7 00:20:01.931444 containerd[1559]: time="2025-07-07T00:20:01.931407696Z" level=info msg="StartContainer for \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\"" Jul 7 00:20:01.934841 containerd[1559]: time="2025-07-07T00:20:01.933768482Z" level=info msg="connecting to shim 12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" protocol=ttrpc version=3 Jul 7 00:20:01.983871 containerd[1559]: time="2025-07-07T00:20:01.983139734Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:20:01.983414 systemd[1]: Started cri-containerd-12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5.scope - libcontainer container 12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5. 
Jul 7 00:20:01.986440 containerd[1559]: time="2025-07-07T00:20:01.986384306Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:20:01.989182 containerd[1559]: time="2025-07-07T00:20:01.989105801Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:20:01.994186 containerd[1559]: time="2025-07-07T00:20:01.994126320Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.946506013s" Jul 7 00:20:01.995108 containerd[1559]: time="2025-07-07T00:20:01.994906629Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:20:01.999851 containerd[1559]: time="2025-07-07T00:20:01.999625589Z" level=info msg="CreateContainer within sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:20:02.018087 containerd[1559]: time="2025-07-07T00:20:02.017996579Z" level=info msg="Container c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:02.044037 containerd[1559]: time="2025-07-07T00:20:02.043964266Z" level=info msg="CreateContainer within sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\"" Jul 7 00:20:02.045568 containerd[1559]: time="2025-07-07T00:20:02.045343193Z" level=info msg="StartContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\"" Jul 7 00:20:02.050669 containerd[1559]: time="2025-07-07T00:20:02.050611734Z" level=info msg="connecting to shim c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89" address="unix:///run/containerd/s/f6dc072e363c5c32f73b1e3f9edf5fa76498e922f7122a5f37b8119d05f244e6" protocol=ttrpc version=3 Jul 7 00:20:02.085315 systemd[1]: cri-containerd-12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5.scope: Deactivated successfully. 
Jul 7 00:20:02.091008 containerd[1559]: time="2025-07-07T00:20:02.090924069Z" level=info msg="received exit event container_id:\"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" id:\"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" pid:3333 exited_at:{seconds:1751847602 nanos:87241599}" Jul 7 00:20:02.092161 containerd[1559]: time="2025-07-07T00:20:02.091684566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" id:\"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" pid:3333 exited_at:{seconds:1751847602 nanos:87241599}" Jul 7 00:20:02.103361 systemd[1]: Started cri-containerd-c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89.scope - libcontainer container c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89. Jul 7 00:20:02.121838 containerd[1559]: time="2025-07-07T00:20:02.121331588Z" level=info msg="StartContainer for \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" returns successfully" Jul 7 00:20:02.248648 containerd[1559]: time="2025-07-07T00:20:02.248472425Z" level=info msg="StartContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" returns successfully" Jul 7 00:20:02.859836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219683800.mount: Deactivated successfully. Jul 7 00:20:02.859995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5-rootfs.mount: Deactivated successfully. Jul 7 00:20:02.874833 containerd[1559]: time="2025-07-07T00:20:02.874378227Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:20:02.899079 containerd[1559]: time="2025-07-07T00:20:02.899000217Z" level=info msg="Container 2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:02.906572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980202075.mount: Deactivated successfully. Jul 7 00:20:02.928584 containerd[1559]: time="2025-07-07T00:20:02.928523730Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\"" Jul 7 00:20:02.929495 containerd[1559]: time="2025-07-07T00:20:02.929359683Z" level=info msg="StartContainer for \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\"" Jul 7 00:20:02.932253 containerd[1559]: time="2025-07-07T00:20:02.932202633Z" level=info msg="connecting to shim 2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" protocol=ttrpc version=3 Jul 7 00:20:03.012246 systemd[1]: Started cri-containerd-2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a.scope - libcontainer container 2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a. Jul 7 00:20:03.175394 systemd[1]: cri-containerd-2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a.scope: Deactivated successfully. 
Jul 7 00:20:03.179923 containerd[1559]: time="2025-07-07T00:20:03.178244837Z" level=info msg="received exit event container_id:\"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" id:\"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" pid:3404 exited_at:{seconds:1751847603 nanos:177215519}" Jul 7 00:20:03.181399 containerd[1559]: time="2025-07-07T00:20:03.181312647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" id:\"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" pid:3404 exited_at:{seconds:1751847603 nanos:177215519}" Jul 7 00:20:03.207181 containerd[1559]: time="2025-07-07T00:20:03.206921649Z" level=info msg="StartContainer for \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" returns successfully" Jul 7 00:20:03.250515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a-rootfs.mount: Deactivated successfully. Jul 7 00:20:03.902838 containerd[1559]: time="2025-07-07T00:20:03.902319006Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:20:03.931922 containerd[1559]: time="2025-07-07T00:20:03.930888124Z" level=info msg="Container 1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:03.950158 containerd[1559]: time="2025-07-07T00:20:03.949705738Z" level=info msg="CreateContainer within sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\"" Jul 7 00:20:03.951474 containerd[1559]: time="2025-07-07T00:20:03.951414146Z" level=info msg="StartContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\"" Jul 7 00:20:03.955266 containerd[1559]: time="2025-07-07T00:20:03.955211127Z" level=info msg="connecting to shim 1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f" address="unix:///run/containerd/s/9b7e80424e77ee3071a888a5e935b1c586739b28eab4b767e9ba4982ee3ea249" protocol=ttrpc version=3 Jul 7 00:20:03.968652 kubelet[2792]: I0707 00:20:03.968184 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rbv4t" podStartSLOduration=3.375667346 podStartE2EDuration="15.968154748s" podCreationTimestamp="2025-07-07 00:19:48 +0000 UTC" firstStartedPulling="2025-07-07 00:19:49.404344522 +0000 UTC m=+6.037614701" lastFinishedPulling="2025-07-07 00:20:01.996831917 +0000 UTC m=+18.630102103" observedRunningTime="2025-07-07 00:20:03.269099391 +0000 UTC m=+19.902369586" watchObservedRunningTime="2025-07-07 00:20:03.968154748 +0000 UTC m=+20.601424939" Jul 7 00:20:04.007361 systemd[1]: Started cri-containerd-1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f.scope - libcontainer container 1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f. 
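In the cilium-operator startup entry above, podStartSLOduration looks like the end-to-end duration minus the image pull window (firstStartedPulling to lastFinishedPulling). Redoing that arithmetic from the printed timestamps, truncated to microseconds, reproduces the logged values to within rounding:

from datetime import datetime, timezone

UTC = timezone.utc
created    = datetime(2025, 7, 7, 0, 19, 48, tzinfo=UTC)                 # podCreationTimestamp
first_pull = datetime(2025, 7, 7, 0, 19, 49, 404344, tzinfo=UTC)         # firstStartedPulling
last_pull  = datetime(2025, 7, 7, 0, 20, 1, 996831, tzinfo=UTC)          # lastFinishedPulling
observed   = datetime(2025, 7, 7, 0, 20, 3, 968154, tzinfo=UTC)          # watchObservedRunningTime

pull_s = (last_pull - first_pull).total_seconds()
e2e_s  = (observed - created).total_seconds()
print(round(e2e_s, 3), round(pull_s, 3), round(e2e_s - pull_s, 3))
# 15.968 12.592 3.376 -- podStartE2EDuration, pull window, podStartSLOduration
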
Jul 7 00:20:04.071606 containerd[1559]: time="2025-07-07T00:20:04.071468700Z" level=info msg="StartContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" returns successfully" Jul 7 00:20:04.183253 containerd[1559]: time="2025-07-07T00:20:04.183115290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" id:\"5bd1b6812dd2a0a7c26ac55992f6b562d2fb6854f55f9837293a53d7ae23719b\" pid:3474 exited_at:{seconds:1751847604 nanos:182574973}" Jul 7 00:20:04.198407 kubelet[2792]: I0707 00:20:04.198272 2792 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:20:04.257027 systemd[1]: Created slice kubepods-burstable-pod07d60e40_b58a_43d5_9d82_13d8c11fbb2f.slice - libcontainer container kubepods-burstable-pod07d60e40_b58a_43d5_9d82_13d8c11fbb2f.slice. Jul 7 00:20:04.271837 kubelet[2792]: I0707 00:20:04.271020 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07d60e40-b58a-43d5-9d82-13d8c11fbb2f-config-volume\") pod \"coredns-668d6bf9bc-vnq42\" (UID: \"07d60e40-b58a-43d5-9d82-13d8c11fbb2f\") " pod="kube-system/coredns-668d6bf9bc-vnq42" Jul 7 00:20:04.272302 kubelet[2792]: I0707 00:20:04.272267 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zplp\" (UniqueName: \"kubernetes.io/projected/07d60e40-b58a-43d5-9d82-13d8c11fbb2f-kube-api-access-2zplp\") pod \"coredns-668d6bf9bc-vnq42\" (UID: \"07d60e40-b58a-43d5-9d82-13d8c11fbb2f\") " pod="kube-system/coredns-668d6bf9bc-vnq42" Jul 7 00:20:04.272485 kubelet[2792]: I0707 00:20:04.272463 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fn6j\" (UniqueName: \"kubernetes.io/projected/dd4181f7-a225-41d0-9176-569757f037b4-kube-api-access-6fn6j\") pod \"coredns-668d6bf9bc-fmtb7\" (UID: \"dd4181f7-a225-41d0-9176-569757f037b4\") " pod="kube-system/coredns-668d6bf9bc-fmtb7" Jul 7 00:20:04.272624 kubelet[2792]: I0707 00:20:04.272605 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd4181f7-a225-41d0-9176-569757f037b4-config-volume\") pod \"coredns-668d6bf9bc-fmtb7\" (UID: \"dd4181f7-a225-41d0-9176-569757f037b4\") " pod="kube-system/coredns-668d6bf9bc-fmtb7" Jul 7 00:20:04.278498 systemd[1]: Created slice kubepods-burstable-poddd4181f7_a225_41d0_9176_569757f037b4.slice - libcontainer container kubepods-burstable-poddd4181f7_a225_41d0_9176_569757f037b4.slice. 
Jul 7 00:20:04.568220 containerd[1559]: time="2025-07-07T00:20:04.567053500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vnq42,Uid:07d60e40-b58a-43d5-9d82-13d8c11fbb2f,Namespace:kube-system,Attempt:0,}" Jul 7 00:20:04.592640 containerd[1559]: time="2025-07-07T00:20:04.592293868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmtb7,Uid:dd4181f7-a225-41d0-9176-569757f037b4,Namespace:kube-system,Attempt:0,}" Jul 7 00:20:04.934826 kubelet[2792]: I0707 00:20:04.932004 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lfqg5" podStartSLOduration=8.279117686 podStartE2EDuration="16.931977501s" podCreationTimestamp="2025-07-07 00:19:48 +0000 UTC" firstStartedPulling="2025-07-07 00:19:49.393697035 +0000 UTC m=+6.026967223" lastFinishedPulling="2025-07-07 00:19:58.046556851 +0000 UTC m=+14.679827038" observedRunningTime="2025-07-07 00:20:04.931466036 +0000 UTC m=+21.564736238" watchObservedRunningTime="2025-07-07 00:20:04.931977501 +0000 UTC m=+21.565247696" Jul 7 00:20:06.721112 systemd-networkd[1473]: cilium_host: Link UP Jul 7 00:20:06.727122 systemd-networkd[1473]: cilium_net: Link UP Jul 7 00:20:06.729028 systemd-networkd[1473]: cilium_net: Gained carrier Jul 7 00:20:06.731833 systemd-networkd[1473]: cilium_host: Gained carrier Jul 7 00:20:06.883295 systemd-networkd[1473]: cilium_net: Gained IPv6LL Jul 7 00:20:06.891028 systemd-networkd[1473]: cilium_vxlan: Link UP Jul 7 00:20:06.891039 systemd-networkd[1473]: cilium_vxlan: Gained carrier Jul 7 00:20:07.115057 systemd-networkd[1473]: cilium_host: Gained IPv6LL Jul 7 00:20:07.187843 kernel: NET: Registered PF_ALG protocol family Jul 7 00:20:08.117114 systemd-networkd[1473]: lxc_health: Link UP Jul 7 00:20:08.129116 systemd-networkd[1473]: lxc_health: Gained carrier Jul 7 00:20:08.637453 systemd-networkd[1473]: lxc0174cb4568ed: Link UP Jul 7 00:20:08.647854 kernel: eth0: renamed from tmp19237 Jul 7 00:20:08.656040 systemd-networkd[1473]: lxc0174cb4568ed: Gained carrier Jul 7 00:20:08.684033 systemd-networkd[1473]: lxc4fd4d0421650: Link UP Jul 7 00:20:08.705741 kernel: eth0: renamed from tmp6b985 Jul 7 00:20:08.710286 systemd-networkd[1473]: cilium_vxlan: Gained IPv6LL Jul 7 00:20:08.719917 systemd-networkd[1473]: lxc4fd4d0421650: Gained carrier Jul 7 00:20:09.156004 systemd-networkd[1473]: lxc_health: Gained IPv6LL Jul 7 00:20:10.371171 systemd-networkd[1473]: lxc0174cb4568ed: Gained IPv6LL Jul 7 00:20:10.627245 systemd-networkd[1473]: lxc4fd4d0421650: Gained IPv6LL Jul 7 00:20:13.584093 ntpd[1542]: Listen normally on 7 cilium_host 192.168.0.37:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 7 cilium_host 192.168.0.37:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 8 cilium_net [fe80::d0ba:fff:fe33:b25b%4]:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 9 cilium_host [fe80::d06e:5dff:fe44:5f37%5]:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 10 cilium_vxlan [fe80::6820:c6ff:feeb:d25e%6]:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 11 lxc_health [fe80::6845:6fff:fe5e:a0be%8]:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 12 lxc0174cb4568ed [fe80::f8c2:c4ff:fe87:2033%10]:123 Jul 7 00:20:13.585172 ntpd[1542]: 7 Jul 00:20:13 ntpd[1542]: Listen normally on 13 lxc4fd4d0421650 [fe80::8c3e:39ff:fe42:8c50%12]:123 Jul 7 
00:20:13.584223 ntpd[1542]: Listen normally on 8 cilium_net [fe80::d0ba:fff:fe33:b25b%4]:123 Jul 7 00:20:13.584298 ntpd[1542]: Listen normally on 9 cilium_host [fe80::d06e:5dff:fe44:5f37%5]:123 Jul 7 00:20:13.584351 ntpd[1542]: Listen normally on 10 cilium_vxlan [fe80::6820:c6ff:feeb:d25e%6]:123 Jul 7 00:20:13.584403 ntpd[1542]: Listen normally on 11 lxc_health [fe80::6845:6fff:fe5e:a0be%8]:123 Jul 7 00:20:13.584454 ntpd[1542]: Listen normally on 12 lxc0174cb4568ed [fe80::f8c2:c4ff:fe87:2033%10]:123 Jul 7 00:20:13.584505 ntpd[1542]: Listen normally on 13 lxc4fd4d0421650 [fe80::8c3e:39ff:fe42:8c50%12]:123 Jul 7 00:20:14.041266 containerd[1559]: time="2025-07-07T00:20:14.041076446Z" level=info msg="connecting to shim 6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f" address="unix:///run/containerd/s/8d828f67c113007d14ed51b1aec546fa4d5e1ada7a98012dd4752fe6251bcbf1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:20:14.075860 containerd[1559]: time="2025-07-07T00:20:14.075279631Z" level=info msg="connecting to shim 192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1" address="unix:///run/containerd/s/dd02e2f3865798a09cb4f71df44982a413264636ffa4afc8b7eb24b4aa7bc89f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:20:14.129510 systemd[1]: Started cri-containerd-6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f.scope - libcontainer container 6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f. Jul 7 00:20:14.146945 systemd[1]: Started cri-containerd-192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1.scope - libcontainer container 192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1. Jul 7 00:20:14.268010 containerd[1559]: time="2025-07-07T00:20:14.267950652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fmtb7,Uid:dd4181f7-a225-41d0-9176-569757f037b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f\"" Jul 7 00:20:14.273864 containerd[1559]: time="2025-07-07T00:20:14.272523412Z" level=info msg="CreateContainer within sandbox \"6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:20:14.315204 containerd[1559]: time="2025-07-07T00:20:14.313099380Z" level=info msg="Container 65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:14.327193 containerd[1559]: time="2025-07-07T00:20:14.327141709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vnq42,Uid:07d60e40-b58a-43d5-9d82-13d8c11fbb2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1\"" Jul 7 00:20:14.333321 containerd[1559]: time="2025-07-07T00:20:14.333228013Z" level=info msg="CreateContainer within sandbox \"192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:20:14.337973 containerd[1559]: time="2025-07-07T00:20:14.337833290Z" level=info msg="CreateContainer within sandbox \"6b9854c05b195d5d1a2e5a4282294c411d29cffc09b467c556f86b0b85b1470f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba\"" Jul 7 00:20:14.340431 containerd[1559]: time="2025-07-07T00:20:14.340360290Z" level=info msg="StartContainer for 
\"65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba\"" Jul 7 00:20:14.343357 containerd[1559]: time="2025-07-07T00:20:14.343270664Z" level=info msg="connecting to shim 65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba" address="unix:///run/containerd/s/8d828f67c113007d14ed51b1aec546fa4d5e1ada7a98012dd4752fe6251bcbf1" protocol=ttrpc version=3 Jul 7 00:20:14.348923 containerd[1559]: time="2025-07-07T00:20:14.348857086Z" level=info msg="Container ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:14.360597 containerd[1559]: time="2025-07-07T00:20:14.360538709Z" level=info msg="CreateContainer within sandbox \"192372545e563add945bed3b5da3d8028cadbb2ae82f2a9e1be71cab64bf27e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61\"" Jul 7 00:20:14.364776 containerd[1559]: time="2025-07-07T00:20:14.362739664Z" level=info msg="StartContainer for \"ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61\"" Jul 7 00:20:14.367368 containerd[1559]: time="2025-07-07T00:20:14.367184361Z" level=info msg="connecting to shim ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61" address="unix:///run/containerd/s/dd02e2f3865798a09cb4f71df44982a413264636ffa4afc8b7eb24b4aa7bc89f" protocol=ttrpc version=3 Jul 7 00:20:14.380214 systemd[1]: Started cri-containerd-65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba.scope - libcontainer container 65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba. Jul 7 00:20:14.422110 systemd[1]: Started cri-containerd-ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61.scope - libcontainer container ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61. Jul 7 00:20:14.464271 containerd[1559]: time="2025-07-07T00:20:14.464110997Z" level=info msg="StartContainer for \"65062117b35d57f47dc96c878aaa767a1f582c1c952385f96faa585454fb19ba\" returns successfully" Jul 7 00:20:14.502047 containerd[1559]: time="2025-07-07T00:20:14.501973426Z" level=info msg="StartContainer for \"ba0f1f8425879e53dd72f12ee50e973648ff4636b23b52a7fc2e333cde892b61\" returns successfully" Jul 7 00:20:14.962246 kubelet[2792]: I0707 00:20:14.961558 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vnq42" podStartSLOduration=26.961531938 podStartE2EDuration="26.961531938s" podCreationTimestamp="2025-07-07 00:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:20:14.959264327 +0000 UTC m=+31.592534523" watchObservedRunningTime="2025-07-07 00:20:14.961531938 +0000 UTC m=+31.594802134" Jul 7 00:20:14.994656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055409746.mount: Deactivated successfully. Jul 7 00:20:39.634098 systemd[1]: Started sshd@9-10.128.0.28:22-139.178.68.195:50080.service - OpenSSH per-connection server daemon (139.178.68.195:50080). Jul 7 00:20:39.947486 sshd[4121]: Accepted publickey for core from 139.178.68.195 port 50080 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:39.952300 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:39.961221 systemd-logind[1548]: New session 10 of user core. Jul 7 00:20:39.971724 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 00:20:40.276837 sshd[4123]: Connection closed by 139.178.68.195 port 50080 Jul 7 00:20:40.277988 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:40.284734 systemd[1]: sshd@9-10.128.0.28:22-139.178.68.195:50080.service: Deactivated successfully. Jul 7 00:20:40.288092 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:20:40.289906 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:20:40.292696 systemd-logind[1548]: Removed session 10. Jul 7 00:20:45.332263 systemd[1]: Started sshd@10-10.128.0.28:22-139.178.68.195:50092.service - OpenSSH per-connection server daemon (139.178.68.195:50092). Jul 7 00:20:45.642782 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 50092 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:45.644757 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:45.651929 systemd-logind[1548]: New session 11 of user core. Jul 7 00:20:45.657095 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:20:45.935135 sshd[4145]: Connection closed by 139.178.68.195 port 50092 Jul 7 00:20:45.936298 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:45.942513 systemd[1]: sshd@10-10.128.0.28:22-139.178.68.195:50092.service: Deactivated successfully. Jul 7 00:20:45.945291 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:20:45.946738 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:20:45.948881 systemd-logind[1548]: Removed session 11. Jul 7 00:20:51.002133 systemd[1]: Started sshd@11-10.128.0.28:22-139.178.68.195:47682.service - OpenSSH per-connection server daemon (139.178.68.195:47682). Jul 7 00:20:51.321648 sshd[4160]: Accepted publickey for core from 139.178.68.195 port 47682 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:51.323485 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:51.330965 systemd-logind[1548]: New session 12 of user core. Jul 7 00:20:51.336114 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:20:51.620539 sshd[4162]: Connection closed by 139.178.68.195 port 47682 Jul 7 00:20:51.621519 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:51.627548 systemd[1]: sshd@11-10.128.0.28:22-139.178.68.195:47682.service: Deactivated successfully. Jul 7 00:20:51.630842 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:20:51.632707 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:20:51.635259 systemd-logind[1548]: Removed session 12. Jul 7 00:20:56.680194 systemd[1]: Started sshd@12-10.128.0.28:22-139.178.68.195:47688.service - OpenSSH per-connection server daemon (139.178.68.195:47688). Jul 7 00:20:56.998344 sshd[4175]: Accepted publickey for core from 139.178.68.195 port 47688 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:57.000226 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:57.007900 systemd-logind[1548]: New session 13 of user core. Jul 7 00:20:57.015275 systemd[1]: Started session-13.scope - Session 13 of User core. 
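Each SSH connection above gets its own transient service whose name encodes both socket endpoints, e.g. sshd@9-10.128.0.28:22-139.178.68.195:50080.service. A small sketch that splits such a name back into instance number, local address and remote address; the naming pattern is inferred only from the journal lines here, not from sshd or systemd documentation:

```python
import re

UNIT = "sshd@9-10.128.0.28:22-139.178.68.195:50080.service"

# sshd@<n>-<local ip>:<port>-<remote ip>:<port>.service, as seen in the entries above.
m = re.fullmatch(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service", UNIT)
if m:
    n, lip, lport, rip, rport = m.groups()
    print(f"instance {n}: local {lip}:{lport} <- remote {rip}:{rport}")
```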
Jul 7 00:20:57.305472 sshd[4177]: Connection closed by 139.178.68.195 port 47688 Jul 7 00:20:57.306784 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:57.311486 systemd[1]: sshd@12-10.128.0.28:22-139.178.68.195:47688.service: Deactivated successfully. Jul 7 00:20:57.314994 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:20:57.318357 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:20:57.320427 systemd-logind[1548]: Removed session 13. Jul 7 00:20:57.363941 systemd[1]: Started sshd@13-10.128.0.28:22-139.178.68.195:47690.service - OpenSSH per-connection server daemon (139.178.68.195:47690). Jul 7 00:20:57.681846 sshd[4190]: Accepted publickey for core from 139.178.68.195 port 47690 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:57.685344 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:57.693171 systemd-logind[1548]: New session 14 of user core. Jul 7 00:20:57.696053 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:20:58.013980 sshd[4192]: Connection closed by 139.178.68.195 port 47690 Jul 7 00:20:58.015173 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:58.026571 systemd[1]: sshd@13-10.128.0.28:22-139.178.68.195:47690.service: Deactivated successfully. Jul 7 00:20:58.029842 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:20:58.031294 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:20:58.034594 systemd-logind[1548]: Removed session 14. Jul 7 00:20:58.073740 systemd[1]: Started sshd@14-10.128.0.28:22-139.178.68.195:47696.service - OpenSSH per-connection server daemon (139.178.68.195:47696). Jul 7 00:20:58.396503 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 47696 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:20:58.398426 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:58.405880 systemd-logind[1548]: New session 15 of user core. Jul 7 00:20:58.412125 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:20:58.698866 sshd[4203]: Connection closed by 139.178.68.195 port 47696 Jul 7 00:20:58.699701 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:58.705630 systemd[1]: sshd@14-10.128.0.28:22-139.178.68.195:47696.service: Deactivated successfully. Jul 7 00:20:58.708630 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:20:58.710071 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:20:58.712479 systemd-logind[1548]: Removed session 15. Jul 7 00:21:03.750690 systemd[1]: Started sshd@15-10.128.0.28:22-139.178.68.195:42094.service - OpenSSH per-connection server daemon (139.178.68.195:42094). Jul 7 00:21:04.057638 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 42094 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:04.059582 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:04.067443 systemd-logind[1548]: New session 16 of user core. Jul 7 00:21:04.076133 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 7 00:21:04.354830 sshd[4217]: Connection closed by 139.178.68.195 port 42094 Jul 7 00:21:04.355826 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:04.362183 systemd[1]: sshd@15-10.128.0.28:22-139.178.68.195:42094.service: Deactivated successfully. Jul 7 00:21:04.365473 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:21:04.367022 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:21:04.369875 systemd-logind[1548]: Removed session 16. Jul 7 00:21:09.411057 systemd[1]: Started sshd@16-10.128.0.28:22-139.178.68.195:52102.service - OpenSSH per-connection server daemon (139.178.68.195:52102). Jul 7 00:21:09.717216 sshd[4229]: Accepted publickey for core from 139.178.68.195 port 52102 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:09.719184 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:09.726247 systemd-logind[1548]: New session 17 of user core. Jul 7 00:21:09.732239 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:21:10.010089 sshd[4232]: Connection closed by 139.178.68.195 port 52102 Jul 7 00:21:10.011399 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:10.017486 systemd[1]: sshd@16-10.128.0.28:22-139.178.68.195:52102.service: Deactivated successfully. Jul 7 00:21:10.020473 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:21:10.022364 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:21:10.025005 systemd-logind[1548]: Removed session 17. Jul 7 00:21:15.070913 systemd[1]: Started sshd@17-10.128.0.28:22-139.178.68.195:52104.service - OpenSSH per-connection server daemon (139.178.68.195:52104). Jul 7 00:21:15.389986 sshd[4244]: Accepted publickey for core from 139.178.68.195 port 52104 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:15.391957 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:15.399887 systemd-logind[1548]: New session 18 of user core. Jul 7 00:21:15.408174 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:21:15.683293 sshd[4246]: Connection closed by 139.178.68.195 port 52104 Jul 7 00:21:15.685637 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:15.692326 systemd[1]: sshd@17-10.128.0.28:22-139.178.68.195:52104.service: Deactivated successfully. Jul 7 00:21:15.695526 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:21:15.697587 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:21:15.701146 systemd-logind[1548]: Removed session 18. Jul 7 00:21:15.745561 systemd[1]: Started sshd@18-10.128.0.28:22-139.178.68.195:52118.service - OpenSSH per-connection server daemon (139.178.68.195:52118). Jul 7 00:21:16.060332 sshd[4260]: Accepted publickey for core from 139.178.68.195 port 52118 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:16.062303 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:16.069980 systemd-logind[1548]: New session 19 of user core. Jul 7 00:21:16.073116 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 7 00:21:16.413096 sshd[4262]: Connection closed by 139.178.68.195 port 52118 Jul 7 00:21:16.414138 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:16.420742 systemd[1]: sshd@18-10.128.0.28:22-139.178.68.195:52118.service: Deactivated successfully. Jul 7 00:21:16.424298 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:21:16.425907 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:21:16.428656 systemd-logind[1548]: Removed session 19. Jul 7 00:21:16.469594 systemd[1]: Started sshd@19-10.128.0.28:22-139.178.68.195:52132.service - OpenSSH per-connection server daemon (139.178.68.195:52132). Jul 7 00:21:16.796009 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 52132 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:16.798121 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:16.806611 systemd-logind[1548]: New session 20 of user core. Jul 7 00:21:16.814202 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:21:17.796397 sshd[4274]: Connection closed by 139.178.68.195 port 52132 Jul 7 00:21:17.796941 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:17.807770 systemd[1]: sshd@19-10.128.0.28:22-139.178.68.195:52132.service: Deactivated successfully. Jul 7 00:21:17.812897 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:21:17.814609 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:21:17.818710 systemd-logind[1548]: Removed session 20. Jul 7 00:21:17.856157 systemd[1]: Started sshd@20-10.128.0.28:22-139.178.68.195:52148.service - OpenSSH per-connection server daemon (139.178.68.195:52148). Jul 7 00:21:18.168429 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 52148 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:18.171302 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:18.180890 systemd-logind[1548]: New session 21 of user core. Jul 7 00:21:18.197131 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:21:18.601106 sshd[4294]: Connection closed by 139.178.68.195 port 52148 Jul 7 00:21:18.602072 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:18.608529 systemd[1]: sshd@20-10.128.0.28:22-139.178.68.195:52148.service: Deactivated successfully. Jul 7 00:21:18.611619 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:21:18.613302 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:21:18.616135 systemd-logind[1548]: Removed session 21. Jul 7 00:21:18.658541 systemd[1]: Started sshd@21-10.128.0.28:22-139.178.68.195:53984.service - OpenSSH per-connection server daemon (139.178.68.195:53984). Jul 7 00:21:18.974251 sshd[4304]: Accepted publickey for core from 139.178.68.195 port 53984 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:18.976427 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:18.983983 systemd-logind[1548]: New session 22 of user core. Jul 7 00:21:18.991124 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 7 00:21:19.265248 sshd[4306]: Connection closed by 139.178.68.195 port 53984 Jul 7 00:21:19.266161 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:19.274624 systemd[1]: sshd@21-10.128.0.28:22-139.178.68.195:53984.service: Deactivated successfully. Jul 7 00:21:19.277620 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:21:19.279454 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:21:19.281732 systemd-logind[1548]: Removed session 22. Jul 7 00:21:24.323320 systemd[1]: Started sshd@22-10.128.0.28:22-139.178.68.195:53996.service - OpenSSH per-connection server daemon (139.178.68.195:53996). Jul 7 00:21:24.642724 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 53996 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:24.644833 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:24.652378 systemd-logind[1548]: New session 23 of user core. Jul 7 00:21:24.658045 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:21:24.935638 sshd[4325]: Connection closed by 139.178.68.195 port 53996 Jul 7 00:21:24.936159 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:24.941619 systemd[1]: sshd@22-10.128.0.28:22-139.178.68.195:53996.service: Deactivated successfully. Jul 7 00:21:24.946092 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:21:24.950395 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:21:24.953466 systemd-logind[1548]: Removed session 23. Jul 7 00:21:29.988106 systemd[1]: Started sshd@23-10.128.0.28:22-139.178.68.195:39518.service - OpenSSH per-connection server daemon (139.178.68.195:39518). Jul 7 00:21:30.301675 sshd[4337]: Accepted publickey for core from 139.178.68.195 port 39518 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:30.303784 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:30.311585 systemd-logind[1548]: New session 24 of user core. Jul 7 00:21:30.316090 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:21:30.593428 sshd[4339]: Connection closed by 139.178.68.195 port 39518 Jul 7 00:21:30.594630 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:30.601133 systemd[1]: sshd@23-10.128.0.28:22-139.178.68.195:39518.service: Deactivated successfully. Jul 7 00:21:30.603909 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:21:30.605547 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:21:30.608079 systemd-logind[1548]: Removed session 24. Jul 7 00:21:35.650123 systemd[1]: Started sshd@24-10.128.0.28:22-139.178.68.195:39520.service - OpenSSH per-connection server daemon (139.178.68.195:39520). Jul 7 00:21:35.978234 sshd[4351]: Accepted publickey for core from 139.178.68.195 port 39520 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:35.980136 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:35.990006 systemd-logind[1548]: New session 25 of user core. Jul 7 00:21:35.996102 systemd[1]: Started session-25.scope - Session 25 of User core. 
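The repeated open/close pairs above (sessions 10 through 23) can be turned into per-session durations by pairing each "New session N of user core" line with its matching "Removed session N" line. A rough stdlib-only sketch, assuming the journal is available as plain text in this "Jul 7 HH:MM:SS.ffffff" form (the journal omits the year, so one is supplied here); the two sample lines are copied from the log:

```python
import re
from datetime import datetime

LINES = [
    "Jul 7 00:20:39.961221 systemd-logind[1548]: New session 10 of user core.",
    "Jul 7 00:20:40.292696 systemd-logind[1548]: Removed session 10.",
]

TS = r"(\w+ \d+ \d+:\d+:\d+\.\d+)"
opened, durations = {}, {}
for line in LINES:
    if m := re.search(TS + r".*New session (\d+) of user", line):
        opened[m.group(2)] = datetime.strptime(m.group(1) + " 2025", "%b %d %H:%M:%S.%f %Y")
    elif m := re.search(TS + r".*Removed session (\d+)\.", line):
        closed = datetime.strptime(m.group(1) + " 2025", "%b %d %H:%M:%S.%f %Y")
        durations[m.group(2)] = (closed - opened.pop(m.group(2))).total_seconds()

print(durations)  # {'10': 0.331475} -- session 10 lasted well under a second
```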
Jul 7 00:21:36.277330 sshd[4354]: Connection closed by 139.178.68.195 port 39520 Jul 7 00:21:36.278660 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:36.285145 systemd[1]: sshd@24-10.128.0.28:22-139.178.68.195:39520.service: Deactivated successfully. Jul 7 00:21:36.288448 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:21:36.290444 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:21:36.292640 systemd-logind[1548]: Removed session 25. Jul 7 00:21:36.331965 systemd[1]: Started sshd@25-10.128.0.28:22-139.178.68.195:39530.service - OpenSSH per-connection server daemon (139.178.68.195:39530). Jul 7 00:21:36.655373 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 39530 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:36.657394 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:36.664881 systemd-logind[1548]: New session 26 of user core. Jul 7 00:21:36.673057 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:21:38.964855 kubelet[2792]: I0707 00:21:38.963616 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fmtb7" podStartSLOduration=110.96358882 podStartE2EDuration="1m50.96358882s" podCreationTimestamp="2025-07-07 00:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:20:15.028148399 +0000 UTC m=+31.661418598" watchObservedRunningTime="2025-07-07 00:21:38.96358882 +0000 UTC m=+115.596859023" Jul 7 00:21:39.000734 containerd[1559]: time="2025-07-07T00:21:39.000301190Z" level=info msg="StopContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" with timeout 30 (s)" Jul 7 00:21:39.008237 containerd[1559]: time="2025-07-07T00:21:39.008070116Z" level=info msg="Stop container \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" with signal terminated" Jul 7 00:21:39.047894 containerd[1559]: time="2025-07-07T00:21:39.047828036Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:21:39.053853 systemd[1]: cri-containerd-c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89.scope: Deactivated successfully. 
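The "StopContainer ... with timeout 30 (s)" entry followed by "Stop container ... with signal terminated" above reflects the usual two-phase shutdown: deliver SIGTERM, wait up to the timeout, then force-kill. The sketch below is a generic illustration of that pattern for an ordinary PID; it is not containerd's or the CRI's actual implementation:

```python
import os, signal, time

def stop_gracefully(pid: int, timeout: float = 30.0, poll: float = 0.2) -> None:
    """Send SIGTERM, wait up to `timeout` seconds, then SIGKILL if still alive."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)          # probe only: raises ProcessLookupError once the PID is gone
        except ProcessLookupError:
            return                   # process exited within the grace period
        time.sleep(poll)
    os.kill(pid, signal.SIGKILL)     # grace period expired; force termination
```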
Jul 7 00:21:39.058543 containerd[1559]: time="2025-07-07T00:21:39.058367298Z" level=info msg="received exit event container_id:\"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" id:\"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" pid:3365 exited_at:{seconds:1751847699 nanos:56064843}" Jul 7 00:21:39.059189 containerd[1559]: time="2025-07-07T00:21:39.059148720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" id:\"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" pid:3365 exited_at:{seconds:1751847699 nanos:56064843}" Jul 7 00:21:39.061906 containerd[1559]: time="2025-07-07T00:21:39.061767956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" id:\"8acf22ceaa861a7a20f752ebd9d6ad057fb9fe5fd5bcf772ce4480b0673fef78\" pid:4387 exited_at:{seconds:1751847699 nanos:61076072}" Jul 7 00:21:39.068757 containerd[1559]: time="2025-07-07T00:21:39.068709255Z" level=info msg="StopContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" with timeout 2 (s)" Jul 7 00:21:39.070373 containerd[1559]: time="2025-07-07T00:21:39.070246902Z" level=info msg="Stop container \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" with signal terminated" Jul 7 00:21:39.089241 systemd-networkd[1473]: lxc_health: Link DOWN Jul 7 00:21:39.089253 systemd-networkd[1473]: lxc_health: Lost carrier Jul 7 00:21:39.115912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89-rootfs.mount: Deactivated successfully. Jul 7 00:21:39.119708 systemd[1]: cri-containerd-1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f.scope: Deactivated successfully. Jul 7 00:21:39.120352 systemd[1]: cri-containerd-1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f.scope: Consumed 9.859s CPU time, 125.6M memory peak, 128K read from disk, 13.3M written to disk. Jul 7 00:21:39.124307 containerd[1559]: time="2025-07-07T00:21:39.123961071Z" level=info msg="received exit event container_id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" pid:3443 exited_at:{seconds:1751847699 nanos:123200056}" Jul 7 00:21:39.125323 containerd[1559]: time="2025-07-07T00:21:39.124770729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" id:\"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" pid:3443 exited_at:{seconds:1751847699 nanos:123200056}" Jul 7 00:21:39.142903 containerd[1559]: time="2025-07-07T00:21:39.142835031Z" level=info msg="StopContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" returns successfully" Jul 7 00:21:39.144700 containerd[1559]: time="2025-07-07T00:21:39.144457808Z" level=info msg="StopPodSandbox for \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\"" Jul 7 00:21:39.144700 containerd[1559]: time="2025-07-07T00:21:39.144565270Z" level=info msg="Container to stop \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.166155 systemd[1]: cri-containerd-4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83.scope: Deactivated successfully. 
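The TaskExit events above carry exited_at:{seconds:1751847699 nanos:56064843}, i.e. a Unix timestamp plus a nanosecond remainder. Converting it back confirms it lines up with the journal time of these entries (00:21:39.05x):

```python
from datetime import datetime, timezone

seconds, nanos = 1751847699, 56064843            # exited_at from the TaskExit event above
exited = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
print(exited.isoformat())                         # 2025-07-07T00:21:39.056064+00:00
```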
Jul 7 00:21:39.179051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f-rootfs.mount: Deactivated successfully. Jul 7 00:21:39.181393 containerd[1559]: time="2025-07-07T00:21:39.180926439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" id:\"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" pid:2998 exit_status:137 exited_at:{seconds:1751847699 nanos:176341217}" Jul 7 00:21:39.195252 containerd[1559]: time="2025-07-07T00:21:39.195186567Z" level=info msg="StopContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" returns successfully" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.195874603Z" level=info msg="StopPodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\"" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.195958634Z" level=info msg="Container to stop \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.195976065Z" level=info msg="Container to stop \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.195990950Z" level=info msg="Container to stop \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.196006498Z" level=info msg="Container to stop \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.196198 containerd[1559]: time="2025-07-07T00:21:39.196020994Z" level=info msg="Container to stop \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:21:39.213569 systemd[1]: cri-containerd-b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504.scope: Deactivated successfully. Jul 7 00:21:39.249918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83-rootfs.mount: Deactivated successfully. Jul 7 00:21:39.255089 containerd[1559]: time="2025-07-07T00:21:39.255029052Z" level=info msg="shim disconnected" id=4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83 namespace=k8s.io Jul 7 00:21:39.255702 containerd[1559]: time="2025-07-07T00:21:39.255658436Z" level=warning msg="cleaning up after shim disconnected" id=4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83 namespace=k8s.io Jul 7 00:21:39.255702 containerd[1559]: time="2025-07-07T00:21:39.255726063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:21:39.270513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504-rootfs.mount: Deactivated successfully. 
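The containerd entries above follow a key=value layout (time, level, msg, sometimes id or namespace), with quoted values that may contain backslash-escaped quotes. A rough parser for pulling those fields out of a captured line; it only handles the simple quoting visible here and leaves the escaped quotes inside msg as-is:

```python
import re

LINE = r'containerd[1559]: time="2025-07-07T00:21:39.195874603Z" level=info msg="StopPodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\""'

# key=value pairs: values are either bare tokens (level=info) or double-quoted strings
# that may contain backslash-escaped quotes (the msg field above).
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')
fields = {k: (v[1:-1] if v.startswith('"') else v) for k, v in PAIR.findall(LINE)}
print(fields["level"], "->", fields["msg"])
```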
Jul 7 00:21:39.273175 containerd[1559]: time="2025-07-07T00:21:39.272890883Z" level=info msg="shim disconnected" id=b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504 namespace=k8s.io Jul 7 00:21:39.273175 containerd[1559]: time="2025-07-07T00:21:39.272937921Z" level=warning msg="cleaning up after shim disconnected" id=b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504 namespace=k8s.io Jul 7 00:21:39.273175 containerd[1559]: time="2025-07-07T00:21:39.272951126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:21:39.286867 containerd[1559]: time="2025-07-07T00:21:39.285900619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" id:\"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" pid:3039 exit_status:137 exited_at:{seconds:1751847699 nanos:215698699}" Jul 7 00:21:39.286867 containerd[1559]: time="2025-07-07T00:21:39.286129156Z" level=info msg="received exit event sandbox_id:\"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" exit_status:137 exited_at:{seconds:1751847699 nanos:215698699}" Jul 7 00:21:39.288958 containerd[1559]: time="2025-07-07T00:21:39.288916766Z" level=info msg="received exit event sandbox_id:\"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" exit_status:137 exited_at:{seconds:1751847699 nanos:176341217}" Jul 7 00:21:39.290020 containerd[1559]: time="2025-07-07T00:21:39.289985253Z" level=info msg="TearDown network for sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" successfully" Jul 7 00:21:39.291884 containerd[1559]: time="2025-07-07T00:21:39.291849784Z" level=info msg="StopPodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" returns successfully" Jul 7 00:21:39.292060 containerd[1559]: time="2025-07-07T00:21:39.290079011Z" level=info msg="TearDown network for sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" successfully" Jul 7 00:21:39.292154 containerd[1559]: time="2025-07-07T00:21:39.292135362Z" level=info msg="StopPodSandbox for \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" returns successfully" Jul 7 00:21:39.294517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83-shm.mount: Deactivated successfully. 
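Both sandbox TaskExit events above report exit_status:137. Following the usual convention that statuses above 128 encode death by signal (128 + signal number), 137 corresponds to SIGKILL, consistent with the pause containers being torn down rather than exiting on their own. A one-liner to decode it:

```python
import signal

status = 137                                                   # exit_status from the TaskExit events above
if status > 128:
    print(f"killed by {signal.Signals(status - 128).name}")    # killed by SIGKILL
else:
    print(f"exited with code {status}")
```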
Jul 7 00:21:39.470455 kubelet[2792]: I0707 00:21:39.470354 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-kernel\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470455 kubelet[2792]: I0707 00:21:39.470422 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-etc-cni-netd\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470455 kubelet[2792]: I0707 00:21:39.470453 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cni-path\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470488 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-hubble-tls\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470559 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-run\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470589 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-xtables-lock\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470617 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-cgroup\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470648 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvfn\" (UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-kube-api-access-grvfn\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.470858 kubelet[2792]: I0707 00:21:39.470680 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-config-path\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 00:21:39.470709 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d42x2\" (UniqueName: \"kubernetes.io/projected/843880a1-7803-4479-97f7-690f1e2791e4-kube-api-access-d42x2\") pod \"843880a1-7803-4479-97f7-690f1e2791e4\" (UID: \"843880a1-7803-4479-97f7-690f1e2791e4\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 
00:21:39.470739 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-bpf-maps\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 00:21:39.470763 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-lib-modules\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 00:21:39.470790 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-net\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 00:21:39.470860 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/843880a1-7803-4479-97f7-690f1e2791e4-cilium-config-path\") pod \"843880a1-7803-4479-97f7-690f1e2791e4\" (UID: \"843880a1-7803-4479-97f7-690f1e2791e4\") " Jul 7 00:21:39.471162 kubelet[2792]: I0707 00:21:39.470891 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe3baa84-9318-4e77-9d2f-8abe63724c57-clustermesh-secrets\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471450 kubelet[2792]: I0707 00:21:39.470926 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-hostproc\") pod \"fe3baa84-9318-4e77-9d2f-8abe63724c57\" (UID: \"fe3baa84-9318-4e77-9d2f-8abe63724c57\") " Jul 7 00:21:39.471450 kubelet[2792]: I0707 00:21:39.471011 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-hostproc" (OuterVolumeSpecName: "hostproc") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.471450 kubelet[2792]: I0707 00:21:39.471070 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.471450 kubelet[2792]: I0707 00:21:39.471096 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cni-path" (OuterVolumeSpecName: "cni-path") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.473099 kubelet[2792]: I0707 00:21:39.471533 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.473099 kubelet[2792]: I0707 00:21:39.471852 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.473099 kubelet[2792]: I0707 00:21:39.471929 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.473099 kubelet[2792]: I0707 00:21:39.471957 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.476081 kubelet[2792]: I0707 00:21:39.475982 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.476256 kubelet[2792]: I0707 00:21:39.476155 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.476256 kubelet[2792]: I0707 00:21:39.476208 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:21:39.479821 kubelet[2792]: I0707 00:21:39.479128 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:21:39.483569 kubelet[2792]: I0707 00:21:39.483509 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843880a1-7803-4479-97f7-690f1e2791e4-kube-api-access-d42x2" (OuterVolumeSpecName: "kube-api-access-d42x2") pod "843880a1-7803-4479-97f7-690f1e2791e4" (UID: "843880a1-7803-4479-97f7-690f1e2791e4"). InnerVolumeSpecName "kube-api-access-d42x2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:21:39.483969 kubelet[2792]: I0707 00:21:39.483935 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/843880a1-7803-4479-97f7-690f1e2791e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "843880a1-7803-4479-97f7-690f1e2791e4" (UID: "843880a1-7803-4479-97f7-690f1e2791e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:21:39.484736 kubelet[2792]: I0707 00:21:39.484696 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-kube-api-access-grvfn" (OuterVolumeSpecName: "kube-api-access-grvfn") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "kube-api-access-grvfn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:21:39.486052 kubelet[2792]: I0707 00:21:39.485980 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:21:39.486518 kubelet[2792]: I0707 00:21:39.486475 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3baa84-9318-4e77-9d2f-8abe63724c57-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fe3baa84-9318-4e77-9d2f-8abe63724c57" (UID: "fe3baa84-9318-4e77-9d2f-8abe63724c57"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571719 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-kernel\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571790 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-etc-cni-netd\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571840 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cni-path\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571856 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-hubble-tls\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571871 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-run\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571885 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grvfn\" (UniqueName: \"kubernetes.io/projected/fe3baa84-9318-4e77-9d2f-8abe63724c57-kube-api-access-grvfn\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.571937 kubelet[2792]: I0707 00:21:39.571907 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-xtables-lock\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.571923 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-cgroup\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.571939 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe3baa84-9318-4e77-9d2f-8abe63724c57-cilium-config-path\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.571960 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d42x2\" (UniqueName: \"kubernetes.io/projected/843880a1-7803-4479-97f7-690f1e2791e4-kube-api-access-d42x2\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.571976 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-bpf-maps\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath 
\"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.571991 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-lib-modules\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.572006 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-host-proc-sys-net\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572467 kubelet[2792]: I0707 00:21:39.572021 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/843880a1-7803-4479-97f7-690f1e2791e4-cilium-config-path\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572864 kubelet[2792]: I0707 00:21:39.572036 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe3baa84-9318-4e77-9d2f-8abe63724c57-clustermesh-secrets\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.572864 kubelet[2792]: I0707 00:21:39.572052 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe3baa84-9318-4e77-9d2f-8abe63724c57-hostproc\") on node \"ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:21:39.709123 systemd[1]: Removed slice kubepods-besteffort-pod843880a1_7803_4479_97f7_690f1e2791e4.slice - libcontainer container kubepods-besteffort-pod843880a1_7803_4479_97f7_690f1e2791e4.slice. Jul 7 00:21:39.714049 systemd[1]: Removed slice kubepods-burstable-podfe3baa84_9318_4e77_9d2f_8abe63724c57.slice - libcontainer container kubepods-burstable-podfe3baa84_9318_4e77_9d2f_8abe63724c57.slice. Jul 7 00:21:39.714359 systemd[1]: kubepods-burstable-podfe3baa84_9318_4e77_9d2f_8abe63724c57.slice: Consumed 10.035s CPU time, 126.1M memory peak, 128K read from disk, 13.3M written to disk. Jul 7 00:21:40.114699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504-shm.mount: Deactivated successfully. Jul 7 00:21:40.115944 systemd[1]: var-lib-kubelet-pods-843880a1\x2d7803\x2d4479\x2d97f7\x2d690f1e2791e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd42x2.mount: Deactivated successfully. Jul 7 00:21:40.116077 systemd[1]: var-lib-kubelet-pods-fe3baa84\x2d9318\x2d4e77\x2d9d2f\x2d8abe63724c57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrvfn.mount: Deactivated successfully. Jul 7 00:21:40.116186 systemd[1]: var-lib-kubelet-pods-fe3baa84\x2d9318\x2d4e77\x2d9d2f\x2d8abe63724c57-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:21:40.116300 systemd[1]: var-lib-kubelet-pods-fe3baa84\x2d9318\x2d4e77\x2d9d2f\x2d8abe63724c57-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
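The mount units being cleaned up above use systemd's path escaping: "/" becomes "-", while literal "-" and "~" are hex-escaped as \x2d and \x7e. A small sketch reversing that for the unit names in these lines, equivalent in spirit to `systemd-escape --unescape --path`, which would be the canonical tool:

```python
import re

UNIT = r"var-lib-kubelet-pods-fe3baa84\x2d9318\x2d4e77\x2d9d2f\x2d8abe63724c57-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount"

def unescape_mount_unit(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")           # "-" separators map back to "/"
    # \xNN sequences (written literally in the journal) decode to the escaped character.
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

print(unescape_mount_unit(UNIT))
# /var/lib/kubelet/pods/fe3baa84-9318-4e77-9d2f-8abe63724c57/volumes/kubernetes.io~projected/hubble-tls
```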
Jul 7 00:21:40.179933 kubelet[2792]: I0707 00:21:40.179533 2792 scope.go:117] "RemoveContainer" containerID="1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f" Jul 7 00:21:40.191434 containerd[1559]: time="2025-07-07T00:21:40.190260401Z" level=info msg="RemoveContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\"" Jul 7 00:21:40.207922 containerd[1559]: time="2025-07-07T00:21:40.207662943Z" level=info msg="RemoveContainer for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" returns successfully" Jul 7 00:21:40.213512 kubelet[2792]: I0707 00:21:40.213451 2792 scope.go:117] "RemoveContainer" containerID="2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a" Jul 7 00:21:40.225389 containerd[1559]: time="2025-07-07T00:21:40.225328800Z" level=info msg="RemoveContainer for \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\"" Jul 7 00:21:40.237035 containerd[1559]: time="2025-07-07T00:21:40.236892866Z" level=info msg="RemoveContainer for \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" returns successfully" Jul 7 00:21:40.239152 kubelet[2792]: I0707 00:21:40.237458 2792 scope.go:117] "RemoveContainer" containerID="12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5" Jul 7 00:21:40.247128 containerd[1559]: time="2025-07-07T00:21:40.247064048Z" level=info msg="RemoveContainer for \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\"" Jul 7 00:21:40.256723 containerd[1559]: time="2025-07-07T00:21:40.256551917Z" level=info msg="RemoveContainer for \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" returns successfully" Jul 7 00:21:40.257064 kubelet[2792]: I0707 00:21:40.256976 2792 scope.go:117] "RemoveContainer" containerID="166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800" Jul 7 00:21:40.261308 containerd[1559]: time="2025-07-07T00:21:40.261252601Z" level=info msg="RemoveContainer for \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\"" Jul 7 00:21:40.271842 containerd[1559]: time="2025-07-07T00:21:40.270509412Z" level=info msg="RemoveContainer for \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" returns successfully" Jul 7 00:21:40.274243 kubelet[2792]: I0707 00:21:40.274083 2792 scope.go:117] "RemoveContainer" containerID="c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5" Jul 7 00:21:40.279422 containerd[1559]: time="2025-07-07T00:21:40.279357141Z" level=info msg="RemoveContainer for \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\"" Jul 7 00:21:40.287284 containerd[1559]: time="2025-07-07T00:21:40.287199677Z" level=info msg="RemoveContainer for \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" returns successfully" Jul 7 00:21:40.287644 kubelet[2792]: I0707 00:21:40.287594 2792 scope.go:117] "RemoveContainer" containerID="1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f" Jul 7 00:21:40.288110 containerd[1559]: time="2025-07-07T00:21:40.288025983Z" level=error msg="ContainerStatus for \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\": not found" Jul 7 00:21:40.288346 kubelet[2792]: E0707 00:21:40.288290 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\": not found" containerID="1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f" Jul 7 00:21:40.288475 kubelet[2792]: I0707 00:21:40.288338 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f"} err="failed to get container status \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dd3408b20c9963ba6f3a28cefc8d47285a0d3a02818d02743889f06d202c06f\": not found" Jul 7 00:21:40.288475 kubelet[2792]: I0707 00:21:40.288478 2792 scope.go:117] "RemoveContainer" containerID="2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a" Jul 7 00:21:40.288922 containerd[1559]: time="2025-07-07T00:21:40.288873348Z" level=error msg="ContainerStatus for \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\": not found" Jul 7 00:21:40.289160 kubelet[2792]: E0707 00:21:40.289102 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\": not found" containerID="2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a" Jul 7 00:21:40.289256 kubelet[2792]: I0707 00:21:40.289157 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a"} err="failed to get container status \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2834de78dec113affda16ecac2e28f53962c1f20f2fd8c0ba8136a062c6e4e6a\": not found" Jul 7 00:21:40.289256 kubelet[2792]: I0707 00:21:40.289192 2792 scope.go:117] "RemoveContainer" containerID="12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5" Jul 7 00:21:40.289584 containerd[1559]: time="2025-07-07T00:21:40.289420211Z" level=error msg="ContainerStatus for \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\": not found" Jul 7 00:21:40.289863 kubelet[2792]: E0707 00:21:40.289833 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\": not found" containerID="12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5" Jul 7 00:21:40.289964 kubelet[2792]: I0707 00:21:40.289892 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5"} err="failed to get container status \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"12e5836be34574ddf948edbcf8eb8fa2e87f0b0aab03518149685c3eda707fc5\": not found" Jul 7 00:21:40.289964 kubelet[2792]: I0707 00:21:40.289923 2792 scope.go:117] 
"RemoveContainer" containerID="166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800" Jul 7 00:21:40.290283 containerd[1559]: time="2025-07-07T00:21:40.290239312Z" level=error msg="ContainerStatus for \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\": not found" Jul 7 00:21:40.290473 kubelet[2792]: E0707 00:21:40.290442 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\": not found" containerID="166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800" Jul 7 00:21:40.290542 kubelet[2792]: I0707 00:21:40.290479 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800"} err="failed to get container status \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\": rpc error: code = NotFound desc = an error occurred when try to find container \"166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800\": not found" Jul 7 00:21:40.290542 kubelet[2792]: I0707 00:21:40.290504 2792 scope.go:117] "RemoveContainer" containerID="c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5" Jul 7 00:21:40.290852 containerd[1559]: time="2025-07-07T00:21:40.290789275Z" level=error msg="ContainerStatus for \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\": not found" Jul 7 00:21:40.291136 kubelet[2792]: E0707 00:21:40.291032 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\": not found" containerID="c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5" Jul 7 00:21:40.291136 kubelet[2792]: I0707 00:21:40.291072 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5"} err="failed to get container status \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9943681c516d46a96b7c4a80dd6fd42a1f46a8274b2181b569bc3d4587ff4b5\": not found" Jul 7 00:21:40.291136 kubelet[2792]: I0707 00:21:40.291103 2792 scope.go:117] "RemoveContainer" containerID="c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89" Jul 7 00:21:40.293735 containerd[1559]: time="2025-07-07T00:21:40.293694080Z" level=info msg="RemoveContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\"" Jul 7 00:21:40.300085 containerd[1559]: time="2025-07-07T00:21:40.300031381Z" level=info msg="RemoveContainer for \"c1af23d10f64a661f1c196ee8924dec526bd29fd3b3ddb6105fb6b3e732eaa89\" returns successfully" Jul 7 00:21:40.958892 sshd[4369]: Connection closed by 139.178.68.195 port 39530 Jul 7 00:21:40.958486 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:40.967252 systemd[1]: sshd@25-10.128.0.28:22-139.178.68.195:39530.service: Deactivated 
successfully. Jul 7 00:21:40.971097 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:21:40.971483 systemd[1]: session-26.scope: Consumed 1.517s CPU time, 23.9M memory peak. Jul 7 00:21:40.972895 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:21:40.975766 systemd-logind[1548]: Removed session 26. Jul 7 00:21:41.019318 systemd[1]: Started sshd@26-10.128.0.28:22-139.178.68.195:50422.service - OpenSSH per-connection server daemon (139.178.68.195:50422). Jul 7 00:21:41.338643 sshd[4521]: Accepted publickey for core from 139.178.68.195 port 50422 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:21:41.340734 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:41.348397 systemd-logind[1548]: New session 27 of user core. Jul 7 00:21:41.357404 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 00:21:41.584040 ntpd[1542]: Deleting interface #11 lxc_health, fe80::6845:6fff:fe5e:a0be%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jul 7 00:21:41.584617 ntpd[1542]: 7 Jul 00:21:41 ntpd[1542]: Deleting interface #11 lxc_health, fe80::6845:6fff:fe5e:a0be%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jul 7 00:21:41.702774 kubelet[2792]: I0707 00:21:41.702604 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843880a1-7803-4479-97f7-690f1e2791e4" path="/var/lib/kubelet/pods/843880a1-7803-4479-97f7-690f1e2791e4/volumes" Jul 7 00:21:41.704251 kubelet[2792]: I0707 00:21:41.704202 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3baa84-9318-4e77-9d2f-8abe63724c57" path="/var/lib/kubelet/pods/fe3baa84-9318-4e77-9d2f-8abe63724c57/volumes" Jul 7 00:21:42.318788 kubelet[2792]: I0707 00:21:42.317678 2792 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe3baa84-9318-4e77-9d2f-8abe63724c57" containerName="cilium-agent" Jul 7 00:21:42.319467 kubelet[2792]: I0707 00:21:42.319046 2792 memory_manager.go:355] "RemoveStaleState removing state" podUID="843880a1-7803-4479-97f7-690f1e2791e4" containerName="cilium-operator" Jul 7 00:21:42.322039 sshd[4524]: Connection closed by 139.178.68.195 port 50422 Jul 7 00:21:42.322998 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:42.344744 systemd[1]: sshd@26-10.128.0.28:22-139.178.68.195:50422.service: Deactivated successfully. Jul 7 00:21:42.350699 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:21:42.355891 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:21:42.390221 systemd[1]: Created slice kubepods-burstable-poddd9efa25_90ef_4db0_92d6_109aacc1cdda.slice - libcontainer container kubepods-burstable-poddd9efa25_90ef_4db0_92d6_109aacc1cdda.slice. Jul 7 00:21:42.396015 kubelet[2792]: I0707 00:21:42.395039 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-etc-cni-netd\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc" Jul 7 00:21:42.395161 systemd[1]: Started sshd@27-10.128.0.28:22-139.178.68.195:50438.service - OpenSSH per-connection server daemon (139.178.68.195:50438). 
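The kubelet entries above show it calling ContainerStatus over CRI for containers containerd has already deleted and treating the resulting gRPC NotFound as a benign "already gone" outcome. The following is a minimal, hypothetical sketch of a CRI client doing the same check; it is not the kubelet's own code, and the socket path is assumed to be containerd's default on this host.

```go
// Sketch only: query a container's status over CRI and treat gRPC NotFound as
// "already removed", mirroring the kubelet log lines above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Container ID taken from the log excerpt above.
	const id = "166edf91c00159c0bbb24e8ac66018760a0247d4625def61030599d5bf9c4800"
	resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed; nothing to do")
		return
	}
	if err != nil {
		log.Fatalf("ContainerStatus: %v", err)
	}
	fmt.Printf("state: %v\n", resp.GetStatus().GetState())
}
```

A NotFound here simply means the runtime finished the removal before the status query arrived, which is exactly what the "DeleteContainer returned error" lines record.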
Jul 7 00:21:42.399443 kubelet[2792]: I0707 00:21:42.398910 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-cilium-run\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399443 kubelet[2792]: I0707 00:21:42.398964 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-xtables-lock\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399443 kubelet[2792]: I0707 00:21:42.398989 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd9efa25-90ef-4db0-92d6-109aacc1cdda-cilium-ipsec-secrets\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399443 kubelet[2792]: I0707 00:21:42.399014 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-host-proc-sys-net\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399443 kubelet[2792]: I0707 00:21:42.399042 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-host-proc-sys-kernel\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399069 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd9efa25-90ef-4db0-92d6-109aacc1cdda-cilium-config-path\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399091 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd9efa25-90ef-4db0-92d6-109aacc1cdda-hubble-tls\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399128 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd9efa25-90ef-4db0-92d6-109aacc1cdda-clustermesh-secrets\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399156 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzpd\" (UniqueName: \"kubernetes.io/projected/dd9efa25-90ef-4db0-92d6-109aacc1cdda-kube-api-access-sxzpd\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399191 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-bpf-maps\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.399854 kubelet[2792]: I0707 00:21:42.399218 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-hostproc\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.400191 kubelet[2792]: I0707 00:21:42.399246 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-cilium-cgroup\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.400191 kubelet[2792]: I0707 00:21:42.399277 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-cni-path\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.400191 kubelet[2792]: I0707 00:21:42.399303 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd9efa25-90ef-4db0-92d6-109aacc1cdda-lib-modules\") pod \"cilium-984wc\" (UID: \"dd9efa25-90ef-4db0-92d6-109aacc1cdda\") " pod="kube-system/cilium-984wc"
Jul 7 00:21:42.401178 systemd-logind[1548]: Removed session 27.
Jul 7 00:21:42.707763 containerd[1559]: time="2025-07-07T00:21:42.707685488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-984wc,Uid:dd9efa25-90ef-4db0-92d6-109aacc1cdda,Namespace:kube-system,Attempt:0,}"
Jul 7 00:21:42.748137 containerd[1559]: time="2025-07-07T00:21:42.747924922Z" level=info msg="connecting to shim a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" namespace=k8s.io protocol=ttrpc version=3
Jul 7 00:21:42.753999 sshd[4534]: Accepted publickey for core from 139.178.68.195 port 50438 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc
Jul 7 00:21:42.756842 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:21:42.767425 systemd-logind[1548]: New session 28 of user core.
Jul 7 00:21:42.775431 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 00:21:42.791170 systemd[1]: Started cri-containerd-a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2.scope - libcontainer container a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2.
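The reconciler entries above list the volumes attached for pod cilium-984wc only by name and plugin type (host-path, secret, configmap, projected), not by the host paths behind them. As a rough, hedged reconstruction, a few of those volumes might be declared as below with client-go types; the paths and the referenced Secret name are conventional Cilium defaults and are assumptions, not values read from this log.

```go
// Sketch only: illustrative volume declarations matching the names in the
// kubelet reconciler log lines. Host paths and Secret names are assumed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	fileOrCreate := corev1.HostPathFileOrCreate

	volumes := []corev1.Volume{
		{Name: "cilium-run", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/run/cilium", Type: &dirOrCreate}}},
		{Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &dirOrCreate}}},
		{Name: "lib-modules", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}}},
		{Name: "xtables-lock", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock", Type: &fileOrCreate}}},
		// Secret-backed volume; the Secret name here is an assumption.
		{Name: "clustermesh-secrets", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}}},
	}

	for _, v := range volumes {
		fmt.Printf("volume %q -> %+v\n", v.Name, v.VolumeSource)
	}
}
```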
Jul 7 00:21:42.832631 containerd[1559]: time="2025-07-07T00:21:42.832556699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-984wc,Uid:dd9efa25-90ef-4db0-92d6-109aacc1cdda,Namespace:kube-system,Attempt:0,} returns sandbox id \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\""
Jul 7 00:21:42.838160 containerd[1559]: time="2025-07-07T00:21:42.838085332Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 00:21:42.848414 containerd[1559]: time="2025-07-07T00:21:42.848346455Z" level=info msg="Container 15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:21:42.858380 containerd[1559]: time="2025-07-07T00:21:42.858303385Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\""
Jul 7 00:21:42.861326 containerd[1559]: time="2025-07-07T00:21:42.860626991Z" level=info msg="StartContainer for \"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\""
Jul 7 00:21:42.861912 containerd[1559]: time="2025-07-07T00:21:42.861876742Z" level=info msg="connecting to shim 15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" protocol=ttrpc version=3
Jul 7 00:21:42.890081 systemd[1]: Started cri-containerd-15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8.scope - libcontainer container 15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8.
Jul 7 00:21:42.936976 containerd[1559]: time="2025-07-07T00:21:42.936927174Z" level=info msg="StartContainer for \"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\" returns successfully"
Jul 7 00:21:42.949527 systemd[1]: cri-containerd-15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8.scope: Deactivated successfully.
Jul 7 00:21:42.954111 containerd[1559]: time="2025-07-07T00:21:42.954004635Z" level=info msg="received exit event container_id:\"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\" id:\"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\" pid:4597 exited_at:{seconds:1751847702 nanos:952355606}"
Jul 7 00:21:42.954532 containerd[1559]: time="2025-07-07T00:21:42.954500883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\" id:\"15f84836b7e77ae5f6f02dd6fb6400b4f474787b765f3a8277fc3d1d6938c6e8\" pid:4597 exited_at:{seconds:1751847702 nanos:952355606}"
Jul 7 00:21:42.968892 sshd[4571]: Connection closed by 139.178.68.195 port 50438
Jul 7 00:21:42.969720 sshd-session[4534]: pam_unix(sshd:session): session closed for user core
Jul 7 00:21:42.981931 systemd[1]: sshd@27-10.128.0.28:22-139.178.68.195:50438.service: Deactivated successfully.
Jul 7 00:21:42.985632 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 00:21:42.989872 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit.
Jul 7 00:21:42.993649 systemd-logind[1548]: Removed session 28.
Jul 7 00:21:43.021546 systemd[1]: Started sshd@28-10.128.0.28:22-139.178.68.195:50440.service - OpenSSH per-connection server daemon (139.178.68.195:50440).
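The containerd entries above are the runtime's side of a CreateContainer/StartContainer pair for the mount-cgroup init container inside sandbox a340f194…. A hypothetical helper issuing those same two CRI calls could look like the sketch below, reusing a RuntimeServiceClient as in the earlier example; the image reference and command are placeholders, not the pod's real spec.

```go
// Sketch only: the CreateContainer/StartContainer CRI sequence that produces
// log lines like the mount-cgroup ones above. Not the kubelet's own code.
package cri

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer creates and starts one init container in an existing
// sandbox. sandboxID would be the ID returned by RunPodSandbox (a340f194...
// in the log); image and command below are illustrative assumptions.
func startInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.17"}, // assumed tag
			Command:  []string{"sh", "-ec", "echo mount cgroup2 here"},            // placeholder
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	// StartContainer corresponds to the "StartContainer for ..." log lines.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```

Once the short-lived init process exits, containerd emits the TaskExit event and systemd reports the matching cri-containerd-….scope as deactivated, which is the pattern the remaining init containers below repeat.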
Jul 7 00:21:43.212538 containerd[1559]: time="2025-07-07T00:21:43.212484407Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 00:21:43.224932 containerd[1559]: time="2025-07-07T00:21:43.224268264Z" level=info msg="Container d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:21:43.236552 containerd[1559]: time="2025-07-07T00:21:43.235463779Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\""
Jul 7 00:21:43.240412 containerd[1559]: time="2025-07-07T00:21:43.239679357Z" level=info msg="StartContainer for \"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\""
Jul 7 00:21:43.244067 containerd[1559]: time="2025-07-07T00:21:43.243875157Z" level=info msg="connecting to shim d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" protocol=ttrpc version=3
Jul 7 00:21:43.274155 systemd[1]: Started cri-containerd-d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e.scope - libcontainer container d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e.
Jul 7 00:21:43.334980 containerd[1559]: time="2025-07-07T00:21:43.334854645Z" level=info msg="StartContainer for \"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\" returns successfully"
Jul 7 00:21:43.341849 systemd[1]: cri-containerd-d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e.scope: Deactivated successfully.
Jul 7 00:21:43.342119 sshd[4635]: Accepted publickey for core from 139.178.68.195 port 50440 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc
Jul 7 00:21:43.343617 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:21:43.348404 containerd[1559]: time="2025-07-07T00:21:43.348079410Z" level=info msg="received exit event container_id:\"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\" id:\"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\" pid:4649 exited_at:{seconds:1751847703 nanos:347403389}"
Jul 7 00:21:43.349305 containerd[1559]: time="2025-07-07T00:21:43.348761303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\" id:\"d2d46ab4a835fb3b69ebe9d2f1bef15eb412d03557d944fc8ea073bdee2d0f2e\" pid:4649 exited_at:{seconds:1751847703 nanos:347403389}"
Jul 7 00:21:43.357901 systemd-logind[1548]: New session 29 of user core.
Jul 7 00:21:43.366148 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 00:21:43.629450 containerd[1559]: time="2025-07-07T00:21:43.629360798Z" level=info msg="StopPodSandbox for \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\""
Jul 7 00:21:43.630125 containerd[1559]: time="2025-07-07T00:21:43.629544535Z" level=info msg="TearDown network for sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" successfully"
Jul 7 00:21:43.630125 containerd[1559]: time="2025-07-07T00:21:43.629565239Z" level=info msg="StopPodSandbox for \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" returns successfully"
Jul 7 00:21:43.630125 containerd[1559]: time="2025-07-07T00:21:43.630033598Z" level=info msg="RemovePodSandbox for \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\""
Jul 7 00:21:43.630125 containerd[1559]: time="2025-07-07T00:21:43.630067823Z" level=info msg="Forcibly stopping sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\""
Jul 7 00:21:43.630459 containerd[1559]: time="2025-07-07T00:21:43.630181967Z" level=info msg="TearDown network for sandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" successfully"
Jul 7 00:21:43.632001 containerd[1559]: time="2025-07-07T00:21:43.631952000Z" level=info msg="Ensure that sandbox 4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83 in task-service has been cleanup successfully"
Jul 7 00:21:43.637293 containerd[1559]: time="2025-07-07T00:21:43.637216375Z" level=info msg="RemovePodSandbox \"4e13d521c1652ad6425b4165e8522317e159cd6e80afc195bfdcf79e633b2a83\" returns successfully"
Jul 7 00:21:43.638052 containerd[1559]: time="2025-07-07T00:21:43.638001053Z" level=info msg="StopPodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\""
Jul 7 00:21:43.638202 containerd[1559]: time="2025-07-07T00:21:43.638170422Z" level=info msg="TearDown network for sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" successfully"
Jul 7 00:21:43.638202 containerd[1559]: time="2025-07-07T00:21:43.638197247Z" level=info msg="StopPodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" returns successfully"
Jul 7 00:21:43.638910 containerd[1559]: time="2025-07-07T00:21:43.638781512Z" level=info msg="RemovePodSandbox for \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\""
Jul 7 00:21:43.638910 containerd[1559]: time="2025-07-07T00:21:43.638885839Z" level=info msg="Forcibly stopping sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\""
Jul 7 00:21:43.639073 containerd[1559]: time="2025-07-07T00:21:43.639004269Z" level=info msg="TearDown network for sandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" successfully"
Jul 7 00:21:43.640801 containerd[1559]: time="2025-07-07T00:21:43.640764555Z" level=info msg="Ensure that sandbox b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504 in task-service has been cleanup successfully"
Jul 7 00:21:43.646234 containerd[1559]: time="2025-07-07T00:21:43.646149822Z" level=info msg="RemovePodSandbox \"b46707d3b80e608e000427e80d2a8e37927ed7efbd24dcdc4ecc4a6cd0e53504\" returns successfully"
Jul 7 00:21:43.849971 kubelet[2792]: E0707 00:21:43.849893 2792 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 00:21:44.215422 containerd[1559]: time="2025-07-07T00:21:44.214692651Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 00:21:44.234838 containerd[1559]: time="2025-07-07T00:21:44.230990622Z" level=info msg="Container 7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:21:44.245754 containerd[1559]: time="2025-07-07T00:21:44.245662610Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\""
Jul 7 00:21:44.248471 containerd[1559]: time="2025-07-07T00:21:44.247688282Z" level=info msg="StartContainer for \"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\""
Jul 7 00:21:44.254377 containerd[1559]: time="2025-07-07T00:21:44.254321809Z" level=info msg="connecting to shim 7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" protocol=ttrpc version=3
Jul 7 00:21:44.297140 systemd[1]: Started cri-containerd-7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5.scope - libcontainer container 7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5.
Jul 7 00:21:44.381473 systemd[1]: cri-containerd-7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5.scope: Deactivated successfully.
Jul 7 00:21:44.385328 containerd[1559]: time="2025-07-07T00:21:44.385003840Z" level=info msg="StartContainer for \"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\" returns successfully"
Jul 7 00:21:44.386356 containerd[1559]: time="2025-07-07T00:21:44.386268406Z" level=info msg="received exit event container_id:\"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\" id:\"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\" pid:4705 exited_at:{seconds:1751847704 nanos:385256581}"
Jul 7 00:21:44.386833 containerd[1559]: time="2025-07-07T00:21:44.386574541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\" id:\"7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5\" pid:4705 exited_at:{seconds:1751847704 nanos:385256581}"
Jul 7 00:21:44.426746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d64be2439a1ff22cf5f958a57e80dea9fd9b5707ca5cf27de3e587af7c661d5-rootfs.mount: Deactivated successfully.
Jul 7 00:21:45.223156 containerd[1559]: time="2025-07-07T00:21:45.223098346Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 00:21:45.243709 containerd[1559]: time="2025-07-07T00:21:45.242644179Z" level=info msg="Container 816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:21:45.255439 containerd[1559]: time="2025-07-07T00:21:45.255383050Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\""
Jul 7 00:21:45.256307 containerd[1559]: time="2025-07-07T00:21:45.256256238Z" level=info msg="StartContainer for \"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\""
Jul 7 00:21:45.258643 containerd[1559]: time="2025-07-07T00:21:45.258507058Z" level=info msg="connecting to shim 816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" protocol=ttrpc version=3
Jul 7 00:21:45.298150 systemd[1]: Started cri-containerd-816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287.scope - libcontainer container 816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287.
Jul 7 00:21:45.342083 systemd[1]: cri-containerd-816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287.scope: Deactivated successfully.
Jul 7 00:21:45.344732 containerd[1559]: time="2025-07-07T00:21:45.344357311Z" level=info msg="received exit event container_id:\"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\" id:\"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\" pid:4745 exited_at:{seconds:1751847705 nanos:343991556}"
Jul 7 00:21:45.344732 containerd[1559]: time="2025-07-07T00:21:45.344671282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\" id:\"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\" pid:4745 exited_at:{seconds:1751847705 nanos:343991556}"
Jul 7 00:21:45.357443 containerd[1559]: time="2025-07-07T00:21:45.357382796Z" level=info msg="StartContainer for \"816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287\" returns successfully"
Jul 7 00:21:45.383012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-816f529919552d5a6ce53469fd4f2c49aeb906d53f23a11e3e4c33cb42c71287-rootfs.mount: Deactivated successfully.
Jul 7 00:21:45.596938 kubelet[2792]: I0707 00:21:45.595917 2792 setters.go:602] "Node became not ready" node="ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:21:45Z","lastTransitionTime":"2025-07-07T00:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 00:21:46.233515 containerd[1559]: time="2025-07-07T00:21:46.233365987Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 00:21:46.249266 containerd[1559]: time="2025-07-07T00:21:46.249048051Z" level=info msg="Container e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde: CDI devices from CRI Config.CDIDevices: []"
Jul 7 00:21:46.267981 containerd[1559]: time="2025-07-07T00:21:46.267905852Z" level=info msg="CreateContainer within sandbox \"a340f19490e01fa74629c5856d28ad083d316e1444547c731688137f183b16b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\""
Jul 7 00:21:46.269010 containerd[1559]: time="2025-07-07T00:21:46.268697286Z" level=info msg="StartContainer for \"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\""
Jul 7 00:21:46.270682 containerd[1559]: time="2025-07-07T00:21:46.270641543Z" level=info msg="connecting to shim e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde" address="unix:///run/containerd/s/0ba64a785c3a2c65912de265287d7adf478ae834d1ece5174d285355fff11e2b" protocol=ttrpc version=3
Jul 7 00:21:46.315110 systemd[1]: Started cri-containerd-e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde.scope - libcontainer container e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde.
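The setters.go entry above is the kubelet flipping the node's Ready condition to False because the CNI plugin is not yet initialized; it recovers once the new cilium-agent container is running and lxc_health comes up in the entries that follow. As a hedged illustration only, the client-go sketch below reads that same condition back from the API server; the kubeconfig path is an assumption, while the node name is taken from the log.

```go
// Sketch only: read the node's Ready condition, the one the kubelet set to
// False in the "Node became not ready" log line above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const nodeName = "ci-4344-1-1-da8d03a3a4d94a39b20f.c.flatcar-212911.internal"
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI plugin is uninitialized this reports False with
			// reason KubeletNotReady, matching the log entry above.
			fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```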
Jul 7 00:21:46.387822 containerd[1559]: time="2025-07-07T00:21:46.387736101Z" level=info msg="StartContainer for \"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" returns successfully"
Jul 7 00:21:46.599307 containerd[1559]: time="2025-07-07T00:21:46.598186958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"4f77587b31b44d9dabb41bd8e3f30e550400076cf2dc3f471c900d531a2ef396\" pid:4812 exited_at:{seconds:1751847706 nanos:597597893}"
Jul 7 00:21:47.060162 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 7 00:21:47.808096 containerd[1559]: time="2025-07-07T00:21:47.808039850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"687d303e65e0665bcd14f0088ba6c9e706e739d2eaae7a76ff5324153306d16f\" pid:4889 exit_status:1 exited_at:{seconds:1751847707 nanos:807173041}"
Jul 7 00:21:49.995619 containerd[1559]: time="2025-07-07T00:21:49.995545204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"e2549b0594950b208f1b48959bc1864b8d204ffe26f5dc7805b1b78b3b6befcc\" pid:5229 exit_status:1 exited_at:{seconds:1751847709 nanos:994490799}"
Jul 7 00:21:50.474964 systemd-networkd[1473]: lxc_health: Link UP
Jul 7 00:21:50.490702 systemd-networkd[1473]: lxc_health: Gained carrier
Jul 7 00:21:50.752453 kubelet[2792]: I0707 00:21:50.752250 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-984wc" podStartSLOduration=8.752228247 podStartE2EDuration="8.752228247s" podCreationTimestamp="2025-07-07 00:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:21:47.259154625 +0000 UTC m=+123.892424831" watchObservedRunningTime="2025-07-07 00:21:50.752228247 +0000 UTC m=+127.385498441"
Jul 7 00:21:51.555925 systemd-networkd[1473]: lxc_health: Gained IPv6LL
Jul 7 00:21:52.330975 containerd[1559]: time="2025-07-07T00:21:52.330779068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"ffa7a35454c4286fcdb5783aeec4b85d5ecc95f045764eb9656df4da117c51e9\" pid:5362 exited_at:{seconds:1751847712 nanos:327982676}"
Jul 7 00:21:53.584266 ntpd[1542]: Listen normally on 14 lxc_health [fe80::e80d:33ff:fe8b:795f%14]:123
Jul 7 00:21:53.584918 ntpd[1542]: 7 Jul 00:21:53 ntpd[1542]: Listen normally on 14 lxc_health [fe80::e80d:33ff:fe8b:795f%14]:123
Jul 7 00:21:54.555721 containerd[1559]: time="2025-07-07T00:21:54.555661925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"ef9a0c61113b4085eed59ea02369a3265bda80c2200c016ceead1293858e9868\" pid:5390 exited_at:{seconds:1751847714 nanos:554845216}"
Jul 7 00:21:56.768173 containerd[1559]: time="2025-07-07T00:21:56.768123800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"4372e33dcfcf17c19c1685d73cc2892963f9d47b043a487a80fd1ebdb458cd70\" pid:5421 exited_at:{seconds:1751847716 nanos:766964835}"
Jul 7 00:21:58.942409 containerd[1559]: time="2025-07-07T00:21:58.942346865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e98892c0002a117f7f30ae3f038770dea7022fe129284a057dcea3caadeb7bde\" id:\"52c6e6a6e229d8353bf9eea2eb4859c046fb9e9e1711728592e277ce8d7df684\" pid:5445 exited_at:{seconds:1751847718 nanos:941785813}"
Jul 7 00:21:58.993017 sshd[4683]: Connection closed by 139.178.68.195 port 50440
Jul 7 00:21:58.994063 sshd-session[4635]: pam_unix(sshd:session): session closed for user core
Jul 7 00:21:59.000553 systemd[1]: sshd@28-10.128.0.28:22-139.178.68.195:50440.service: Deactivated successfully.
Jul 7 00:21:59.004067 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 00:21:59.005521 systemd-logind[1548]: Session 29 logged out. Waiting for processes to exit.
Jul 7 00:21:59.008466 systemd-logind[1548]: Removed session 29.